[petsc-users] petsc4py and PCMGGetSmoother

2024-05-28 Thread Klaij, Christiaan
I'm attempting some rapid prototyping with petsc4py. The idea is basically to 
read in a matrix and rhs, set up GAMG as a preonly KSP, get the smoother and 
overrule it with a Python function of my own, similar to the demo where the 
Jacobi method is user-defined in Python. So far I have something like this:

pc = PETSc.PC().create()
pc.setOperators(A)
pc.setType(PETSc.PC.Type.GAMG)
pc.PCMGGetSmoother(l=0,ksp=smoother)

which triggers the following error:

AttributeError: 'petsc4py.PETSc.PC' object has no attribute 'PCMGGetSmoother'

1) Am I doing something wrong, or is this function just not available through 
Python?

2) How can I tell up front whether a function is available? Is it available 
only if it is listed in libpetsc4py.pyx?

3) Given the function description in C from the manual pages, how can I deduce 
the Python syntax?
(perhaps it's supposed to be ksp = pc.PCMGGetSmoother(l=0), or something 
else entirely)
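
For what it's worth, below is a minimal, untested sketch of what I'm guessing 
the usage might be, assuming petsc4py follows its usual convention of dropping 
the class prefix and returning output arguments (so PCMGGetSmoother would be 
wrapped as pc.getMGSmoother), and assuming PC.setPythonContext is the hook for 
a user-defined smoother as in the Jacobi demo. The MyRichardson class is just 
a hypothetical placeholder for my own smoother:

from petsc4py import PETSc

class MyRichardson(object):
    # hypothetical user-defined smoother, in the style of the python Jacobi demo
    def setUp(self, pc):
        A, P = pc.getOperators()
        self.diag = P.getDiagonal()
    def apply(self, pc, x, y):
        # one Jacobi-like sweep: y <- D^{-1} x
        y.pointwiseDivide(x, self.diag)

pc = PETSc.PC().create()
pc.setOperators(A)              # A read in beforehand
pc.setType(PETSc.PC.Type.GAMG)
pc.setUp()                      # so that the MG hierarchy exists

# list which MG-related methods this petsc4py build actually wraps
print([name for name in dir(pc) if 'MG' in name])

smoother = pc.getMGSmoother(0)  # assumed wrapping of PCMGGetSmoother(pc, 0, &ksp)
smoother.setType(PETSc.KSP.Type.RICHARDSON)
smoother.getPC().setType(PETSc.PC.Type.PYTHON)
smoother.getPC().setPythonContext(MyRichardson())

At least something like dir(pc) or hasattr(pc, 'getMGSmoother') would answer 
question 2 without digging through libpetsc4py.pyx.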

Thanks for your help,
dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
T +31 317 49 33 44 | c.kl...@marin.nl | http://www.marin.nl


Re: [petsc-users] something wrong with digest?

2020-09-25 Thread Klaij, Christiaan

That could be the reason; there were some lengthy emails indeed. But the 
attachments are removed in the digest.

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl


From: Satish Balay 
Sent: Friday, September 25, 2020 9:02 AM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] something wrong with digest?

I see one of the mailman settings is:

How big in Kb should a digest be before it gets sent out? 0 implies no maximum 
size.
(Edit digest_size_threshhold)
40

I don't think we ever changed this value [so likely a default]

And I see a bunch of e-mails on the list exceeding this number. Perhaps this is 
the reason for this many digests

[There is no archive of the digests (that I can find) - so can't verify]

If this is the case - it must happen every time there is an e-mail with 
attachments to the list..

Satish

On Fri, 25 Sep 2020, Klaij, Christiaan wrote:

>
> Today I got more than 20 petsc-user Digest emails in my inbox (Vol 141, Issue 
> 78 to 101), many containing only one message and being sent within a few 
> minutes of each other (10:39, 10:41, 10:44, ...). Is this really how Digest is 
> supposed to work?
>
> Chris
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>
>



[petsc-users] something wrong with digest?

2020-09-25 Thread Klaij, Christiaan

Today I got more than 20 petsc-user Digest emails in my inbox (Vol 141, Issue 
78 to 101), many containing only one message and being sent within a few 
minutes of each other (10:39, 10:41, 10:44, ...). Is this really how Digest is 
supposed to work?

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



Re: [petsc-users] How to activate the modified Gram-Schmidt orthogonalization process in Fortran?

2020-09-11 Thread Klaij, Christiaan
Sure, that was the advice 9 years ago in the ancient thread. It's not a big 
problem.


Chris

dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>

MARIN news: New publication on noise measurements of a cavitating 
propeller<https://www.marin.nl/news/new-publication-on-noise-measurements-on-a-cavitating-propeller>


From: Zhang, Hong 
Sent: Friday, September 11, 2020 4:57 PM
To: Klaij, Christiaan; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] How to activate the modified Gram-Schmidt 
orthogonalization process in Fortran?

Sorry, we have not done it. Can you use

PetscOptionsSetValue("-ksp_gmres_modifiedgramschmidt", "1")

for now? We'll try to add the fortran binding later.

Hong

________
From: petsc-users  on behalf of Klaij, 
Christiaan 
Sent: Friday, September 11, 2020 7:50 AM
To: petsc-users@mcs.anl.gov 
Subject: Re: [petsc-users] How to activate the modified Gram-Schmidt 
orthogonalization process in Fortran?


Make me feel ancient. Would be nice to have the fortran binding though...

Chris

> --
>
> Message: 1
> Date: Thu, 10 Sep 2020 19:41:30 -0600
> From: Zhuo Chen 
> To: "Zhang, Hong" 
> Cc: "petsc-users@mcs.anl.gov" 
> Subject: Re: [petsc-users] How to activate the modified Gram-Schmidt
> orthogonalization process in Fortran?
> Message-ID:
> 
> Content-Type: text/plain; charset="utf-8"
>
> Hi Hong,
>
> According to that very old thread, KSPGMRESSetOrthogonalization was not
> implemented in Fortran. I did as you suggested and the compiler will tell
> me
>
> undefined reference to `kspgmressetorthogonalization_'
>
> I think I will use the -ksp_gmres_modifiedgramschmidt method. Thank you so
> much!
>

dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl


> On Thu, Sep 10, 2020 at 7:32 PM Zhang, Hong  wrote:
>
> > Zhuo,
> > Call
> > KSPSetType(ksp,KSPGMRES);
> >
> > KSPGMRESSetOrthogonalization(ksp,KSPGMRESModifiedGramSchmidtOrthogonalization);
> > Hong
> >
> > --
> > *From:* Zhuo Chen 
> > *Sent:* Thursday, September 10, 2020 8:17 PM
> > *To:* Zhang, Hong 
> > *Cc:* petsc-users@mcs.anl.gov 
> > *Subject:* Re: [petsc-users] How to activate the modified Gram-Schmidt
> > orthogonalization process in Fortran?
> >
> > Hi Hong,
> >
> > Thank you very much for your help.
> >
> > It seems that if I simply append -ksp_gmres_modifiedgramschmidt the
> > warning goes away. However
> > KSPGMRESSetOrthogonalization(ksp,KSPGMRESModifiedGramSchmidtOrthogonalization,ierr)
> > has another issue.
> >
> > Error: Symbol 'kspgmresmodifiedgramschmidtorthogonalization' at (1) has no
> > IMPLICIT type
> >
> > Is it because the argument is too long? I am using gcc 8.4.0 instead of
> > ifort
> >
> > On Thu, Sep 10, 2020 at 7:08 PM Zhang, Hong  wrote:
> >
> > Zhuo,
> > Run your code with option '-ksp_gmres_modifiedgramschmidt'. For example,
> > petsc/src/ksp/ksp/tutorials
> > mpiexec -n 2 ./ex2 -ksp_view -ksp_gmres_modifiedgramschmidt
> > KSP Object: 2 MPI processes
> >   type: gmres
> > restart=30, using Modified Gram-Schmidt Orthogonalization
> > happy breakdown tolerance 1e-30
> >   maximum iterations=1, initial guess is zero
> >   tolerances:  relative=0.000138889, absolute=1e-50, divergence=1.
> >   left preconditioning
> >   using PRECONDITIONED norm type for convergence test
> > PC Object: 2 MPI processes
> >   type: bjacobi
> > ...
> >
> > You can
> > call 
> > KSPGMRESSetOrthogonalization(ksp,KSPGMRESModifiedGramSchmidtOrthogonalization)
> > in your program.
> >
> > Hong
> >
> > --
> > *From:* petsc-users  on behalf of Zhuo
> > Chen 
> > *Sent:* Thursday, September 10, 2020 7:52 PM
> > *To:* petsc-users@mcs.anl.gov 
> > *Subject:* [petsc-users] How to activate the modified Gram-Schmidt
> > orthogonalization process in Fortran?
> >
> > Dear Petsc users,
> >
> > I found an ancient thread discussing this problem.

Re: [petsc-users] How to activate the modified Gram-Schmidt orthogonalization process in Fortran?

2020-09-11 Thread Klaij, Christiaan

Make me feel ancient. Would be nice to have the fortran binding though...

Chris

> --
>
> Message: 1
> Date: Thu, 10 Sep 2020 19:41:30 -0600
> From: Zhuo Chen 
> To: "Zhang, Hong" 
> Cc: "petsc-users@mcs.anl.gov" 
> Subject: Re: [petsc-users] How to activate the modified Gram-Schmidt
> orthogonalization process in Fortran?
> Message-ID:
> 
> Content-Type: text/plain; charset="utf-8"
>
> Hi Hong,
>
> According to that very old thread, KSPGMRESSetOrthogonalization was not
> implemented in Fortran. I did as you suggested and the compiler will tell
> me
>
> undefined reference to `kspgmressetorthogonalization_'
>
> I think I will use the -ksp_gmres_modifiedgramschmidt method. Thank you so
> much!
>

dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl


> On Thu, Sep 10, 2020 at 7:32 PM Zhang, Hong  wrote:
>
> > Zhuo,
> > Call
> > KSPSetType(ksp,KSPGMRES);
> >
> > KSPGMRESSetOrthogonalization(ksp,KSPGMRESModifiedGramSchmidtOrthogonalization);
> > Hong
> >
> > --
> > *From:* Zhuo Chen 
> > *Sent:* Thursday, September 10, 2020 8:17 PM
> > *To:* Zhang, Hong 
> > *Cc:* petsc-users@mcs.anl.gov 
> > *Subject:* Re: [petsc-users] How to activate the modified Gram-Schmidt
> > orthogonalization process in Fortran?
> >
> > Hi Hong,
> >
> > Thank you very much for your help.
> >
> > It seems that if I simply append -ksp_gmres_modifiedgramschmidt the
> > warning goes away. However
> > KSPGMRESSetOrthogonalization(ksp,KSPGMRESModifiedGramSchmidtOrthogonalization,ierr)
> > has another issue.
> >
> > Error: Symbol 'kspgmresmodifiedgramschmidtorthogonalization' at (1) has no
> > IMPLICIT type
> >
> > Is it because the argument is too long? I am using gcc 8.4.0 instead of
> > ifort
> >
> > On Thu, Sep 10, 2020 at 7:08 PM Zhang, Hong  wrote:
> >
> > Zhuo,
> > Run your code with option '-ksp_gmres_modifiedgramschmidt'. For example,
> > petsc/src/ksp/ksp/tutorials
> > mpiexec -n 2 ./ex2 -ksp_view -ksp_gmres_modifiedgramschmidt
> > KSP Object: 2 MPI processes
> >   type: gmres
> > restart=30, using Modified Gram-Schmidt Orthogonalization
> > happy breakdown tolerance 1e-30
> >   maximum iterations=1, initial guess is zero
> >   tolerances:  relative=0.000138889, absolute=1e-50, divergence=1.
> >   left preconditioning
> >   using PRECONDITIONED norm type for convergence test
> > PC Object: 2 MPI processes
> >   type: bjacobi
> > ...
> >
> > You can
> > call 
> > KSPGMRESSetOrthogonalization(ksp,KSPGMRESModifiedGramSchmidtOrthogonalization)
> > in your program.
> >
> > Hong
> >
> > --
> > *From:* petsc-users  on behalf of Zhuo
> > Chen 
> > *Sent:* Thursday, September 10, 2020 7:52 PM
> > *To:* petsc-users@mcs.anl.gov 
> > *Subject:* [petsc-users] How to activate the modified Gram-Schmidt
> > orthogonalization process in Fortran?
> >
> > Dear Petsc users,
> >
> > I found an ancient thread discussing this problem.
> >
> > https://lists.mcs.anl.gov/pipermail/petsc-users/2011-October/010607.html
> >
> > However, when I add
> >
> > call KSPSetType(ksp,KSPGMRES,ierr);CHKERRQ(ierr)
> > call
> > PetscOptionsSetValue(PETSC_NULL_OPTIONS,'-ksp_gmres_modifiedgramschmidt','1',ierr);CHKERRQ(ierr)
> >
> > the program will tell me
> >
> > WARNING! There are options you set that were not used!
> > WARNING! could be spelling mistake, etc!
> > There is one unused database option. It is:
> > Option left: name:-ksp_gmres_modifiedgramschmidt value: 1
> >
> > I would like to know the most correct way to activate the modified
> > Gram-Schmidt orthogonalization process in Fortran. Thank you very much!
> >
> > Best regards.
> >
> >


Re: [petsc-users] Error with KSPSetUp and MatNest

2019-04-11 Thread Klaij, Christiaan via petsc-users
Just like Manuel Colera Rico, I would like to begin with existing
(sub)matrices and put them directly into a matnest.
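
(For illustration, a minimal petsc4py sketch of the pattern I mean, with two 
hypothetical diagonal submatrices A00 and A11 standing in for my existing 
blocks; in the Fortran code the equivalent call is MatCreateNest. Untested, 
just to make the intent concrete:)

from petsc4py import PETSc

n = 4
A00 = PETSc.Mat().createAIJ([n, n], nnz=1)   # hypothetical existing submatrices
A11 = PETSc.Mat().createAIJ([n, n], nnz=1)
for i in range(n):
    A00[i, i] = 2.0
    A11[i, i] = 1.0
A00.assemble()
A11.assemble()

# put the existing submatrices directly into a nest matrix
# (None stands for an empty off-diagonal block)
A = PETSc.Mat().createNest([[A00, None],
                            [None, A11]])
A.assemble()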

You seem to have understood that problem from the archive 2.5
years ago... If my memory is correct, it was an attempt to
create a mat and switch between -mat_type nest and aij on the
command line.


Chris

dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>

MARIN news: First autonomous manoeuvring vessel trials held on North 
Sea<http://www.marin.nl/web/News/News-items/First-autonomous-manoeuvring-vessel-trials-held-on-North-Sea.htm>


From: Matthew Knepley 
Sent: Thursday, April 11, 2019 2:16 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Error with KSPSetUp and MatNest

On Thu, Apr 11, 2019 at 7:51 AM Klaij, Christiaan via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Matt,

As a happy MATNEST user, I got a bit worried when you wrote "we
should remove MatCreateNest() completely".

This would not remove any of the Nest functionality, just the direct interface 
to it, which is the problem.

This happened last
time I tried to use AIJ instead:

https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2016-August/029973.html

Has this problem been fixed in the meantime?

This looks like a problem with the Nest implementation, not AIJ? Maybe I am 
misunderstanding it.

   Thanks,

 Matt

Chris

> I think we should remove MatCreateNest() completely. We should have 1
> mechanism for getting submatrices for creation, not 2, so we retain
> MatCreateLocalRef(). Then if -mat_type nest was specified, it gives you
> back an actual submatrix, otherwise it gives a view. This would do
> everything that we currently do without this horrible MatNest interface
> bubbling to the top.
>
>Matt
>


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/42nd-FPSO-JIP-Week-April-812-Singapore.htm



--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/<http://www.cse.buffalo.edu/~knepley/>







Re: [petsc-users] Error with KSPSetUp and MatNest

2019-04-11 Thread Klaij, Christiaan via petsc-users
Matt,

As a happy MATNEST user, I got a bit worried when you wrote "we
should remove MatCreateNest() completely". This happened last
time I tried to use AIJ instead:

https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2016-August/029973.html

Has this problem been fixed in the meantime?

Chris

> I think we should remove MatCreateNest() completely. We should have 1
> mechanism for getting submatrices for creation, not 2, so we retain
> MatCreateLocalRef(). Then if -mat_type nest was specified, it gives you
> back an actual submatrix, otherwise it gives a view. This would do
> everything that we currently do without this horrible MatNest interface
> bubbling to the top.
>
>Matt
>


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/42nd-FPSO-JIP-Week-April-812-Singapore.htm



Re: [petsc-users] PetscPartitioner is missing for fortran

2018-11-27 Thread Klaij, Christiaan via petsc-users
Personally, I never try to fix these things myself; that's the
job of the PETSc developers, who know best and can make the fix
available for all users. As a user, I just give feedback whenever
I encounter a problem (which isn't often).

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/DNVGL-certification-for-DOLPHIN-simulator-software.htm


From: Jiaoyan Li 
Sent: Tuesday, November 27, 2018 4:50 PM
To: Klaij, Christiaan; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PetscPartitioner is missing for fortran

Thanks, Chris, for your kind reply. Yes, I believe the mailing list is the most 
convenient way for PETSc users to communicate with each other.

Also, I am wondering if you have used "PetscPartitioner" before? Have you ever 
tried to fix a PETSc-Fortran problem by yourself? I believe PETSc needs to build 
an interface for Fortran from the C source code, but I don't know where PETSc 
does that. I am just trying to see if I can fix it myself. Thank you.

Jiaoyan


On 11/27/18, 00:44, "petsc-users on behalf of Klaij, Christiaan via 
petsc-users"  wrote:

Hi Jiaoyan,

I've been using the fortran interface since 2004. During these
years I only found a handful of things missing. After an email to
this list the developers are usually quite fast in fixing the
problem.

Chris


Date: Mon, 26 Nov 2018 16:41:29 +
From: Jiaoyan Li 
To: "petsc-users@mcs.anl.gov" 
Subject: [petsc-users] PetscPartitioner is missing for fortran
Message-ID: 
Content-Type: text/plain; charset="utf-8"

Dear Petsc Users:

I am developing a Fortran code which uses PETSc APIs. But it seems the 
interface between Fortran and PETSc is not complete, as Barry replied. Does 
anyone have experience building the Fortran interface for PETSc? Any 
suggestions or comments are highly appreciated. Thank you.

Have a nice day,

Jiaoyan




On 11/21/18, 19:25, "Smith, Barry F." <bsm...@mcs.anl.gov> wrote:


   Matt,

 PetscPartitioner is missing from lib/petsc/conf/bfort-petsc.txt

   Barry



> On Nov 21, 2018, at 3:33 PM, Jiaoyan Li via petsc-users 
mailto:petsc-users@mcs.anl.gov>> wrote:
>
> Dear Petsc users:
>
> I am trying to use Petsc APIs for Fortran. One problem that I am 
facing right now is about the PetscPartitioner, i.e.,
>
> #include "petsc/finclude/petsc.h"
>   use petscdmplex
>
>   PetscPartitioner :: part
>   PetscErrorCode :: ierr
>
>   Call PetscPartitionerCreate(PETSC_COMM_WORLD, part, ierr)
>
> But, I got the error message as follows:
>
> [0]PETSC ERROR: - Error Message 
--
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [0]PETSC ERROR: Null Pointer: Parameter # 2
> [0]PETSC ERROR: See 
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.mcs.anl.gov_petsc_documentation_faq.html=DwIGaQ=54IZrppPQZKX9mLzcGdPfFD1hxrcB__aEkJFOKJFd00=5MMpjBrVOPpVGfIH9op1r4nz1k4YC8LDRnpo_HwMgZU=1lomdbavAvxQQpe-IZtEv3xEovYeZ9lxbOzN-sE8CUQ=BYCTTDqEflIdLwRAkF4txknqLg0jeyOcdodQkfHj-TA=
 for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.10.2, unknown
> [0]PETSC ERROR: ./htm3d on a arch-linux2-c-opt named fn607018 by LIJ 
Wed Nov 21 16:30:35 2018
> [0]PETSC ERROR: Configure options --download-fblaslapack 
--with-mpi-dir=/opt/mpitch-3.2.1 -download-exodusii --download-hdf5 
--download-netcdf --download-zlib --download-pnetcdf
> [0]PETSC ERROR: #1 PetscPartitionerCreate() line 601 in 
/home/lij/packages/petsc/src/dm/impls/plex/plexpartition.c
> [1]PETSC ERROR: - Error Message 
--
> [1]PETSC ERROR: Null argument, when expecting valid pointer
> [1]PETSC ERROR: Null Pointer: Parameter # 2
> [1]PETSC ERROR: See 
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.mcs.anl.gov_petsc_documentation_faq.html=DwIGaQ=54IZrppPQZKX9mLzcGdPfFD1hxrcB__aEkJFOKJFd00=5MMpjBrVOPpVGfIH9op1r4nz1k4YC8LDRnpo_HwMgZU=1lomdbavAvxQQpe-IZtEv3xEovYeZ9lxbOzN-sE8CUQ=BYCTTDqEflIdLwRAkF4txknqLg0jeyOcdodQkfHj-TA=
 for trouble shooting.
> [1]PETSC ERROR: Petsc Release Version 3.10.2, unknown
> [1]PETSC ERROR: ./htm3d on a arch-linux2-c-opt named fn607018 by LIJ 
Wed Nov 21 16:30:35 2018
> [1]PETSC ERROR: Conf

Re: [petsc-users] PetscPartitioner is missing for fortran

2018-11-26 Thread Klaij, Christiaan via petsc-users
Hi Jiaoyan,

I've been using the fortran interface since 2004. During these
years I only found a handful of things missing. After an email to
this list the developers are usually quite fast in fixing the
problem.

Chris


Date: Mon, 26 Nov 2018 16:41:29 +
From: Jiaoyan Li 
To: "petsc-users@mcs.anl.gov" 
Subject: [petsc-users] PetscPartitioner is missing for fortran
Message-ID: 
Content-Type: text/plain; charset="utf-8"

Dear Petsc Users:

I am developing a Fortran code which uses PETSc APIs. But it seems the interface 
between Fortran and PETSc is not complete, as Barry replied. Does anyone have 
experience building the Fortran interface for PETSc? Any suggestions or comments 
are highly appreciated. Thank you.

Have a nice day,

Jiaoyan




On 11/21/18, 19:25, "Smith, Barry F." <bsm...@mcs.anl.gov> wrote:


   Matt,

 PetscPartitioner is missing from lib/petsc/conf/bfort-petsc.txt

   Barry



> On Nov 21, 2018, at 3:33 PM, Jiaoyan Li via petsc-users 
mailto:petsc-users@mcs.anl.gov>> wrote:
>
> Dear Petsc users:
>
> I am trying to use Petsc APIs for Fortran. One problem that I am facing 
right now is about the PetscPartitioner, i.e.,
>
> #include "petsc/finclude/petsc.h"
>   use petscdmplex
>
>   PetscPartitioner :: part
>   PetscErrorCode :: ierr
>
>   Call PetscPartitionerCreate(PETSC_COMM_WORLD, part, ierr)
>
> But, I got the error message as follows:
>
> [0]PETSC ERROR: - Error Message 
--
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [0]PETSC ERROR: Null Pointer: Parameter # 2
> [0]PETSC ERROR: See 
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.mcs.anl.gov_petsc_documentation_faq.html=DwIGaQ=54IZrppPQZKX9mLzcGdPfFD1hxrcB__aEkJFOKJFd00=5MMpjBrVOPpVGfIH9op1r4nz1k4YC8LDRnpo_HwMgZU=1lomdbavAvxQQpe-IZtEv3xEovYeZ9lxbOzN-sE8CUQ=BYCTTDqEflIdLwRAkF4txknqLg0jeyOcdodQkfHj-TA=
 for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.10.2, unknown
> [0]PETSC ERROR: ./htm3d on a arch-linux2-c-opt named fn607018 by LIJ Wed 
Nov 21 16:30:35 2018
> [0]PETSC ERROR: Configure options --download-fblaslapack 
--with-mpi-dir=/opt/mpitch-3.2.1 -download-exodusii --download-hdf5 
--download-netcdf --download-zlib --download-pnetcdf
> [0]PETSC ERROR: #1 PetscPartitionerCreate() line 601 in 
/home/lij/packages/petsc/src/dm/impls/plex/plexpartition.c
> [1]PETSC ERROR: - Error Message 
--
> [1]PETSC ERROR: Null argument, when expecting valid pointer
> [1]PETSC ERROR: Null Pointer: Parameter # 2
> [1]PETSC ERROR: See 
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.mcs.anl.gov_petsc_documentation_faq.html=DwIGaQ=54IZrppPQZKX9mLzcGdPfFD1hxrcB__aEkJFOKJFd00=5MMpjBrVOPpVGfIH9op1r4nz1k4YC8LDRnpo_HwMgZU=1lomdbavAvxQQpe-IZtEv3xEovYeZ9lxbOzN-sE8CUQ=BYCTTDqEflIdLwRAkF4txknqLg0jeyOcdodQkfHj-TA=
 for trouble shooting.
> [1]PETSC ERROR: Petsc Release Version 3.10.2, unknown
> [1]PETSC ERROR: ./htm3d on a arch-linux2-c-opt named fn607018 by LIJ Wed 
Nov 21 16:30:35 2018
> [1]PETSC ERROR: Configure options --download-fblaslapack 
--with-mpi-dir=/opt/mpitch-3.2.1 -download-exodusii --download-hdf5 
--download-netcdf --download-zlib --download-pnetcdf
> [1]PETSC ERROR: #1 PetscPartitionerCreate() line 601 in 
/home/lij/packages/petsc/src/dm/impls/plex/plexpartition.c
> [2]PETSC ERROR: - Error Message 
--
> [2]PETSC ERROR: Null argument, when expecting valid pointer
> [2]PETSC ERROR: Null Pointer: Parameter # 2
> [2]PETSC ERROR: See 
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.mcs.anl.gov_petsc_documentation_faq.html=DwIGaQ=54IZrppPQZKX9mLzcGdPfFD1hxrcB__aEkJFOKJFd00=5MMpjBrVOPpVGfIH9op1r4nz1k4YC8LDRnpo_HwMgZU=1lomdbavAvxQQpe-IZtEv3xEovYeZ9lxbOzN-sE8CUQ=BYCTTDqEflIdLwRAkF4txknqLg0jeyOcdodQkfHj-TA=
 for trouble shooting.
> [2]PETSC ERROR: Petsc Release Version 3.10.2, unknown
> [2]PETSC ERROR: ./htm3d on a arch-linux2-c-opt named fn607018 by LIJ Wed 
Nov 21 16:30:35 2018
> [2]PETSC ERROR: Configure options --download-fblaslapack 
--with-mpi-dir=/opt/mpitch-3.2.1 -download-exodusii --download-hdf5 
--download-netcdf --download-zlib --download-pnetcdf
> [2]PETSC ERROR: #1 PetscPartitionerCreate() line 601 in 
/home/lij/packages/petsc/src/dm/impls/plex/plexpartition.c
> [3]PETSC ERROR: - Error Message 
--
> [3]PETSC ERROR: Null argument, when expecting valid pointer
> [3]PETSC ERROR: Null Pointer: Parameter # 2
> [3]PETSC ERROR: See 

Re: [petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-22 Thread Klaij, Christiaan
Thanks Barry and Matt, it makes sense if rr is a pointer instead
of an allocatable. So:

  Vec, POINTER, INTENT(in) :: rr_system

would be the proper way, right?

And out of curiosity, why did petsc-3.8.4 tolerate my wrong
INTENT(out)?

Chris



dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Seminar-Scheepsbrandstof-en-de-mondiale-zwavelnorm-2020.htm


From: Smith, Barry F. 
Sent: Friday, October 19, 2018 10:26 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] fortran INTENT with petsc object gives segfault 
after upgrade from 3.8.4 to 3.10.2

> On Oct 19, 2018, at 9:37 AM, Klaij, Christiaan  wrote:
>
> As far as I (mis)understand fortran, this is a data protection
> thing: all arguments are passed in from above but the subroutine
> is only allowed to change rr and ierr, not aa and xx (if you try,
> you get a compiler warning).

  The routine is not allowed to change rr (it is only allowed to change the 
values "inside" rr), which is why it needs to be intent in or inout. Otherwise 
the compiler can optimize and not pass down the value of the rr pointer to the 
subroutine, since by declaring it as out the compiler thinks your subroutine is 
going to set its value.

Barry


> That's why I find it very odd to
> give an intent(in) to rr. But I've tried your suggestion anyway:
> both intent(in) and intent(inout) for rr do work! Can't say I
> understand though.
>
> Below's a small example of what I was expecting. Change rr to
> intent(in) and the compiler complains.
>
> Chris
>
> $ cat intent.f90
> program intent
>
>  implicit none
>
>  real, allocatable :: aa(:), xx(:), rr(:)
>  integer :: ierr
>
>  allocate(aa(10),xx(10),rr(10))
>
>  aa = 1.0
>  xx = 2.0
>
>  call matmult(aa,xx,rr,ierr)
>
>  print *, rr(1)
>  print *, ierr
>
>  deallocate(aa,xx,rr)
>
>  contains
>
>subroutine matmult(aa,xx,rr,ierr)
>  real, intent(in) :: aa(:), xx(:)
>  real, intent(out):: rr(:)
>  integer, intent(out) :: ierr
>  rr=aa*xx
>  ierr=0
>end subroutine matmult
>
> end program intent
> $ ./a.out
>   2.00
>   0
>
>
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>
> MARIN news: 
> http://www.marin.nl/web/News/News-items/Seminar-Scheepsbrandstof-en-de-mondiale-zwavelnorm-2020.htm
>
> 
> From: Smith, Barry F. 
> Sent: Friday, October 19, 2018 2:32 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] fortran INTENT with petsc object gives segfault 
> after upgrade from 3.8.4 to 3.10.2
>
>   Hmm, the intent of all three first arguments should be in since they are 
> passed in from the routine above. Does it work if you replace
>
>> Vec, INTENT(out) :: rr_system
>
> with
>
>> Vec, INTENT(in) :: rr_system
>
> ?
>
>Barry
>
>
>> On Oct 19, 2018, at 3:51 AM, Klaij, Christiaan  wrote:
>>
>> I've recently upgraded from petsc-3.8.4 to petsc-3.10.2 and was
>> surprised to find a number of segmentation faults in my test
>> cases. These turned out to be related to user-defined MatMult and
>> PCApply for shell matrices. For example:
>>
>> SUBROUTINE systemMatMult(aa_system,xx_system,rr_system,ierr)
>> Mat, INTENT(in) :: aa_system
>> Vec, INTENT(in) :: xx_system
>> Vec, INTENT(out) :: rr_system
>> PetscErrorCode, INTENT(out) :: ierr
>> ...
>> END
>>
>> where aa_system is the shell matrix. This code works fine with
>> 3.8.4 and older, but fails with 3.10.2 due to invalid
>> pointers (gdb backtrace shows failure of VecSetValues due to
>> invalid first argument). After replacing by:
>>
>> SUBROUTINE mass_momentum_systemMatMult(aa_system,xx_system,rr_system,ierr)
>> Mat :: aa_system
>> Vec :: xx_system
>> Vec :: rr_system
>> PetscErrorCode :: ierr
>> ...
>> END
>>
>> everything's fine again. So clearly something has changed since
>> 3.8.4 that now prevents the use of INTENT in Fortran (at least
>> using intel 17.0.1 compilers). Any reason for this?
>>
>> Chris
>>
>>
>> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
>> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>>
>> MARIN news: 
>> http://www.marin.nl/web/News/News-items/ReFRESCO-successfully-coupled-to-ParaView-Catalyst-for-insitu-analysis-1.htm
>>
>



Re: [petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Klaij, Christiaan
As far as I (mis)understand fortran, this is a data protection
thing: all arguments are passed in from above but the subroutine
is only allowed to change rr and ierr, not aa and xx (if you try,
you get a compiler warning). That's why I find it very odd to
give an intent(in) to rr. But I've tried your suggestion anyway:
both intent(in) and intent(inout) for rr do work! Can't say I
understand though.

Below's a small example of what I was expecting. Change rr to
intent(in) and the compiler complains.

Chris

$ cat intent.f90
program intent

  implicit none

  real, allocatable :: aa(:), xx(:), rr(:)
  integer :: ierr

  allocate(aa(10),xx(10),rr(10))

  aa = 1.0
  xx = 2.0

  call matmult(aa,xx,rr,ierr)

  print *, rr(1)
  print *, ierr

  deallocate(aa,xx,rr)

  contains

subroutine matmult(aa,xx,rr,ierr)
  real, intent(in) :: aa(:), xx(:)
  real, intent(out):: rr(:)
  integer, intent(out) :: ierr
  rr=aa*xx
  ierr=0
end subroutine matmult

end program intent
$ ./a.out
   2.00
   0




dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Seminar-Scheepsbrandstof-en-de-mondiale-zwavelnorm-2020.htm


From: Smith, Barry F. 
Sent: Friday, October 19, 2018 2:32 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] fortran INTENT with petsc object gives segfault 
after upgrade from 3.8.4 to 3.10.2

   Hmm, the intent of all three first arguments should be in since they are 
passed in from the routine above. Does it work if you replace

>  Vec, INTENT(out) :: rr_system

with

>  Vec, INTENT(in) :: rr_system

?

Barry


> On Oct 19, 2018, at 3:51 AM, Klaij, Christiaan  wrote:
>
> I've recently upgraded from petsc-3.8.4 to petsc-3.10.2 and was
> surprised to find a number of segmentation faults in my test
> cases. These turned out to be related to user-defined MatMult and
> PCApply for shell matrices. For example:
>
> SUBROUTINE systemMatMult(aa_system,xx_system,rr_system,ierr)
>  Mat, INTENT(in) :: aa_system
>  Vec, INTENT(in) :: xx_system
>  Vec, INTENT(out) :: rr_system
>  PetscErrorCode, INTENT(out) :: ierr
>  ...
> END
>
> where aa_system is the shell matrix. This code works fine with
> 3.8.4 and older, but fails with 3.10.2 due to invalid
> pointers (gdb backtrace shows failure of VecSetValues due to
> invalid first argument). After replacing by:
>
> SUBROUTINE mass_momentum_systemMatMult(aa_system,xx_system,rr_system,ierr)
>  Mat :: aa_system
>  Vec :: xx_system
>  Vec :: rr_system
>  PetscErrorCode :: ierr
>  ...
> END
>
> everything's fine again. So clearly something has changed since
> 3.8.4 that now prevents the use of INTENT in Fortran (at least
> using intel 17.0.1 compilers). Any reason for this?
>
> Chris
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>
> MARIN news: 
> http://www.marin.nl/web/News/News-items/ReFRESCO-successfully-coupled-to-ParaView-Catalyst-for-insitu-analysis-1.htm
>



[petsc-users] fortran INTENT with petsc object gives segfault after upgrade from 3.8.4 to 3.10.2

2018-10-19 Thread Klaij, Christiaan
I've recently upgraded from petsc-3.8.4 to petsc-3.10.2 and was
surprised to find a number of segmentation faults in my test
cases. These turned out to be related to user-defined MatMult and
PCApply for shell matrices. For example:

SUBROUTINE systemMatMult(aa_system,xx_system,rr_system,ierr)
  Mat, INTENT(in) :: aa_system
  Vec, INTENT(in) :: xx_system
  Vec, INTENT(out) :: rr_system
  PetscErrorCode, INTENT(out) :: ierr
  ...
END

where aa_system is the shell matrix. This code works fine with
3.8.4 and older, but fails with 3.10.2 due to invalid
pointers (gdb backtrace shows failure of VecSetValues due to
invalid first argument). After replacing by:

SUBROUTINE mass_momentum_systemMatMult(aa_system,xx_system,rr_system,ierr)
  Mat :: aa_system
  Vec :: xx_system
  Vec :: rr_system
  PetscErrorCode :: ierr
  ...
END

everything's fine again. So clearly something has changed since
3.8.4 that now prevents the use of INTENT in Fortran (at least
using intel 17.0.1 compilers). Any reason for this?

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/ReFRESCO-successfully-coupled-to-ParaView-Catalyst-for-insitu-analysis-1.htm



Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2018-05-15 Thread Klaij, Christiaan
Matt,


Just a reminder. With petsc-3.8.4 the issue is still there.


Chris


dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>

MARIN news: MARIN at WindDays 2018, June 13 & 14, 
Rotterdam<http://www.marin.nl/web/News/News-items/MARIN-at-WindDays-2018-June-13-14-Rotterdam.htm>


From: Matthew Knepley <knep...@gmail.com>
Sent: Wednesday, January 18, 2017 4:13 PM
To: Klaij, Christiaan
Cc: Lawrence Mitchell; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

On Wed, Jan 18, 2017 at 4:42 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
Thanks Lawrence, that nicely explains the unexpected behaviour!

I guess in general there ought to be getters for the four
ksp(A00)'s that occur in the full factorization.

Yes, we will fix it. I think that the default retrieval should get the 00 
block, not the inner as well.

   Matt


Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Verification-and-validation-exercises-for-flow-around-KVLCC2-tanker.htm


From: Lawrence Mitchell 
<lawrence.mitch...@imperial.ac.uk<mailto:lawrence.mitch...@imperial.ac.uk>>
Sent: Wednesday, January 18, 2017 10:59 AM
To: petsc-users@mcs.anl.gov<mailto:petsc-users@mcs.anl.gov>
Cc: bsm...@mcs.anl.gov<mailto:bsm...@mcs.anl.gov>; Klaij, Christiaan
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

On 18/01/17 08:40, Klaij, Christiaan wrote:
> Barry,
>
> I've managed to replicate the problem with 3.7.4
> snes/examples/tutorials/ex70.c. Basically I've added
> KSPGetTotalIterations to main (file is attached):

PCFieldSplitGetSubKSP returns, in the Schur case:

MatSchurComplementGetKSP(pc->schur, &ksp);

in subksp[0]

and

pc->schur in subksp[1]

In your case, subksp[0] is the (preonly) approximation to A^{-1} *inside*

S = D - C A_inner^{-1} B

And subksp[1] is the approximation to S^{-1}.

Since each application of S to a vector (required in S^{-1}) requires
one application of A^{-1}, because you use 225 iterations in total to
invert S, you also use 225 applications of the KSP on A_inner.

There doesn't appear to be a way to get the KSP used for A^{-1} if
you've asked for different approximations to A^{-1} in the 0,0 block
and inside S.

Cheers,

Lawrence

> $ diff -u ex70.c.bak ex70.c
> --- ex70.c.bak2017-01-18 09:25:46.286174830 +0100
> +++ ex70.c2017-01-18 09:03:40.904483434 +0100
> @@ -669,6 +669,10 @@
>KSPksp;
>PetscErrorCode ierr;
>
> +  KSP*subksp;
> +  PC pc;
> +  PetscInt   numsplit = 1, nusediter_vv, nusediter_pp;
> +
>ierr = PetscInitialize(, , NULL, help);CHKERRQ(ierr);
>s.nx = 4;
>s.ny = 6;
> @@ -690,6 +694,13 @@
>ierr = StokesSetupPC(, ksp);CHKERRQ(ierr);
>ierr = KSPSolve(ksp, s.b, s.x);CHKERRQ(ierr);
>
> +  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
> +  ierr = PCFieldSplitGetSubKSP(pc,&numsplit,&subksp); CHKERRQ(ierr);
> +  ierr = KSPGetTotalIterations(subksp[0],&nusediter_vv); CHKERRQ(ierr);
> +  ierr = KSPGetTotalIterations(subksp[1],&nusediter_pp); CHKERRQ(ierr);
> +  ierr = PetscPrintf(PETSC_COMM_WORLD," total u solves = %i\n", 
> nusediter_vv); CHKERRQ(ierr);
> +  ierr = PetscPrintf(PETSC_COMM_WORLD," total p solves = %i\n", 
> nusediter_pp); CHKERRQ(ierr);
> +
>/* don't trust, verify! */
>ierr = StokesCalcResidual();CHKERRQ(ierr);
>ierr = StokesCalcError();CHKERRQ(ierr);
>
> Now run as follows:
>
> $ mpirun -n 2 ./ex70 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type 
> schur -pc_fieldsplit_schur_fact_type lower -fieldsplit_0_ksp_type gmres 
> -fieldsplit_0_pc_type bjacobi -fieldsplit_1_pc_type jacobi 
> -fieldsplit_1_inner_ksp_type preonly -fieldsplit_1_inner_pc_type jacobi 
> -fieldsplit_1_upper_ksp_type preonly -fieldsplit_1_upper_pc_type jacobi 
> -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONV

[petsc-users] using PETSC_NULL_INTEGER in preallocation routines

2018-05-14 Thread Klaij, Christiaan
With petsc-3.7.5, I had F90 code like this:

  CALL MatSeqAIJSetPreallocation(aa_symmetric,PETSC_NULL_INTEGER,d_nnz,ierr); 
CHKERRQ(ierr)
  CALL 
MatMPIAIJSetPreallocation(aa_symmetric,PETSC_NULL_INTEGER,d_nnz,PETSC_NULL_INTEGER,o_nnz,ierr);
 CHKERRQ(ierr)

which worked fine. Now, with petsc-3.8.4, the same code gives this compilation 
error:

  error #6634: The shape matching rules of actual arguments and dummy arguments 
have been violated.   [PETSC_NULL_INTEGER]
  CALL MatSeqAIJSetPreallocation(aa_symmetric,PETSC_NULL_INTEGER,d_nnz,ierr); 
if (ierr .ne. 0) then ; call PetscErrorF(ierr); return; endif
--^
  error #6634: The shape matching rules of actual arguments and dummy arguments 
have been violated.   [PETSC_NULL_INTEGER]
  CALL 
MatMPIAIJSetPreallocation(aa_symmetric,PETSC_NULL_INTEGER,d_nnz,PETSC_NULL_INTEGER,o_nnz,ierr);
 if (ierr .ne. 0) then ; call PetscErrorF(ierr); return; endif
--^

What's the intended usage now, simply 0 instead of PETSC_NULL_INTEGER?

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/120-papers-presented-at-NAV2018.htm



Re: [petsc-users] --download-hdf5 and then compiling cgns

2018-04-05 Thread Klaij, Christiaan
That would be nice. The minimum for me would be

./configure \
--prefix=/path/to/myprefix \
--with-hdf5=/path/to/petsc/hdf5 \
--with-fortran=yes

but I sure like to have their tools as well

./configure \
--prefix=/path/to/myprefix \
--with-hdf5=/path/to/petsc/hdf5 \
--with-fortran=yes \
--enable-cgnstools \
--datarootdir=/path/to/myprefix/share

This is more tricky as it requires tcl and tk libs. Also
datarootdir is supposed to be PREFIX/share by default, but
there's a bug so I had to specify it explicitly.

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Kom-zaterdag-10-november-naar-de-open-dag-in-Wageningen.htm


From: Satish Balay <ba...@mcs.anl.gov>
Sent: Thursday, April 05, 2018 3:31 PM
To: Klaij, Christiaan
Cc: Gaetan Kenway; petsc-users
Subject: Re: [petsc-users] --download-hdf5 and then compiling cgns

Glad it works..

Perhaps it is a simple fix to add a --download-cgns option to petsc configure..
[I see it's a direct dependency from PETSc sources - 
src/dm/impls/plex/plexcgns.c ]

Satish


On Thu, 5 Apr 2018, Klaij, Christiaan wrote:

> Satish, Gaetan, thanks for your suggestions. I've decided not to
> use cmake for cgns, and that works fine with petsc's build of
> hdf5.
>
>
> Chris
>
> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
> www.marin.nl<http://www.marin.nl>
>
> MARIN news: Come to the open day in Wageningen on Saturday 10 
> November!<http://www.marin.nl/web/News/News-items/Kom-zaterdag-10-november-naar-de-open-dag-in-Wageningen.htm>
>
> ____
> From: Gaetan Kenway <gaet...@gmail.com>
> Sent: Thursday, March 29, 2018 6:43 PM
> To: petsc-users
> Cc: Klaij, Christiaan
> Subject: Re: [petsc-users] --download-hdf5 and then compiling cgns
>
> I have compiled CGNS3.3 with hdf from PETSc but not using cmake from CGNS. 
> Here is what I used:
>
> git clone  https://github.com/CGNS/CGNS.git
>
> cd CGNS/src
> export CC=mpicc
> export FC=mpif90
> export FCFLAGS='-fPIC -O3'
> export CFLAGS='-fPIC -O3'
> export LIBS='-lz -ldl'
>
> ./configure --with-hdf5=$PETSC_DIR/$PETSC_ARCH --enable-64bit 
> --prefix=/u/wk/gkenway/.local  --enable-parallel --enable-cgnstools=yes 
> --enable-shared
> make -j 10
>
> Hope that helps
> Gaetan
>
> On Thu, Mar 29, 2018 at 9:37 AM, Satish Balay 
> <ba...@mcs.anl.gov<mailto:ba...@mcs.anl.gov>> wrote:
> --download-hdf5 uses 'autoconf' build of hdf5 - and not cmake
> build. [hdf5 provides both build tools].
>
> This must be the reason why hdf5Config.cmake and hdf5-config.cmake
> files might be missing.
>
> You might be able to install hdf5 manually with cmake - and then build
> cgns-3.3.1 with it.
>
> Satish
>
>
> On Thu, 29 Mar 2018, Klaij, Christiaan wrote:
>
> > Satish,
> >
> > I've built petsc-3.8.3 with --download-hdf5 and this works
> > fine. Now I'm trying to compile cgns-3.3.1 and have it use the
> > hdf5 library from PETSc. Therefore, in the cmake configure I have
> > set this option
> >
> > -D HDF5_DIR=/path/to/petsc/hdf5
> >
> > but cmake gives this message:
> >
> > CMake Warning at CMakeLists.txt:207 (find_package):
> >   Could not find a package configuration file provided by "HDF5" with any of
> >   the following names:
> >
> > hdf5Config.cmake
> > hdf5-config.cmake
> >
> > and then falls back to the system's hdf5. Indeed there are no
> > such files in /path/to/petsc/hdf5. Any idea on how to proceed
> > from here?
> >
> > Chris
> >
> >
> > dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> > MARIN | T +31 317 49 33 44 | 
> > mailto:c.kl...@marin.nl<mailto:c.kl...@marin.nl> | http://www.marin.nl
> >
> > MARIN news: 
> > http://www.marin.nl/web/News/News-items/Comfort-draft-for-semisubmersible-yachts.htm
> >
> >
>
>
>
>
>



Re: [petsc-users] --download-hdf5 and then compiling cgns

2018-04-05 Thread Klaij, Christiaan
Satish, Gaetan, thanks for your suggestions. I've decided not to
use cmake for cgns, and that works fine with petsc's build of
hdf5.


Chris

dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>

MARIN news: Come to the open day in Wageningen on Saturday 10 
November!<http://www.marin.nl/web/News/News-items/Kom-zaterdag-10-november-naar-de-open-dag-in-Wageningen.htm>


From: Gaetan Kenway <gaet...@gmail.com>
Sent: Thursday, March 29, 2018 6:43 PM
To: petsc-users
Cc: Klaij, Christiaan
Subject: Re: [petsc-users] --download-hdf5 and then compiling cgns

I have compiled CGNS3.3 with hdf from PETSc but not using cmake from CGNS. Here 
is what I used:

git clone  https://github.com/CGNS/CGNS.git

cd CGNS/src
export CC=mpicc
export FC=mpif90
export FCFLAGS='-fPIC -O3'
export CFLAGS='-fPIC -O3'
export LIBS='-lz -ldl'

./configure --with-hdf5=$PETSC_DIR/$PETSC_ARCH --enable-64bit 
--prefix=/u/wk/gkenway/.local  --enable-parallel --enable-cgnstools=yes 
--enable-shared
make -j 10

Hope that helps
Gaetan

On Thu, Mar 29, 2018 at 9:37 AM, Satish Balay 
<ba...@mcs.anl.gov<mailto:ba...@mcs.anl.gov>> wrote:
--download-hdf5 uses 'autoconf' build of hdf5 - and not cmake
build. [hdf5 provides both build tools].

This must be the reason why hdf5Config.cmake and hdf5-config.cmake
files might be missing.

You might be able to install hdf5 manually with cmake - and then build
cgns-3.3.1 with it.

Satish


On Thu, 29 Mar 2018, Klaij, Christiaan wrote:

> Satish,
>
> I've built petsc-3.8.3 with --download-hdf5 and this works
> fine. Now I'm trying to compile cgns-3.3.1 and have it use the
> hdf5 library from PETSc. Therefore, in the cmake configure I have
> set this option
>
> -D HDF5_DIR=/path/to/petsc/hdf5
>
> but cmake gives this message:
>
> CMake Warning at CMakeLists.txt:207 (find_package):
>   Could not find a package configuration file provided by "HDF5" with any of
>   the following names:
>
> hdf5Config.cmake
> hdf5-config.cmake
>
> and then falls back to the system's hdf5. Indeed there are no
> such files in /path/to/petsc/hdf5. Any idea on how to proceed
> from here?
>
> Chris
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl<mailto:c.kl...@marin.nl> 
> | http://www.marin.nl
>
> MARIN news: 
> http://www.marin.nl/web/News/News-items/Comfort-draft-for-semisubmersible-yachts.htm
>
>






[petsc-users] --download-hdf5 and then compiling cgns

2018-03-29 Thread Klaij, Christiaan
Satish,

I've built petsc-3.8.3 with --download-hdf5 and this works
fine. Now I'm trying to compile cgns-3.3.1 and have it use the
hdf5 library from PETSc. Therefore, in the cmake configure I have
set this option

-D HDF5_DIR=/path/to/petsc/hdf5

but cmake gives this message:

CMake Warning at CMakeLists.txt:207 (find_package):
  Could not find a package configuration file provided by "HDF5" with any of
  the following names:

hdf5Config.cmake
hdf5-config.cmake

and then falls back to the system's hdf5. Indeed there are no
such files in /path/to/petsc/hdf5. Any idea on how to proceed
from here?

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Comfort-draft-for-semisubmersible-yachts.htm



Re: [petsc-users] problem configuring 3.8.3 with intel mpi

2018-03-22 Thread Klaij, Christiaan
Well, that's a small burden.

By the way, the option --with-cc=mpicc together with export
I_MPI_CC=icc doesn't work with --download-hdf5, while it does
work with --with-cc=mpiicc and no export. I guess the export doesn't
make it into the hdf5 configure.

Any plans for adding --download-cgns and have it work with HDF5
and fortran enabled? That would ease my burden.

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Frequency-of-spill-model-for-area-risk-assessment-of-shipsource-oil-spills-in-Canadian-waters-1.htm


From: Satish Balay <ba...@mcs.anl.gov>
Sent: Thursday, March 22, 2018 4:03 PM
To: Klaij, Christiaan
Cc: petsc-users
Subject: Re: [petsc-users] problem configuring 3.8.3 with intel mpi

Should mention:

one nice thing about --with-mpi-dir option is - it attempts to pick up
related/compatible compilers - i.e MPI_DIR/bin/{mpicc,mpicxx,mpif90}

When one uses --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx - the
burden on providing related/compatible compilers is on the user - i.e
more error cases with configure using unrelated/incompatible mpi
compilers..

Will try to add support for
--with-mpi-dir=/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi
- and then prefer mpiicc over mpicc [in both with-mpi-dir check and
PATH check?]


Satish

On Thu, 22 Mar 2018, Klaij, Christiaan wrote:

> Guess I shouldn't have read the intel manual :-)
>
> Lesson learned, no more guessing. And I sure appreciate all the
> effort you are putting into this, especially the option to
> configure and build other packages in a consistent way with the
> --download-package option.
>
> Chris
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>
> MARIN news: 
> http://www.marin.nl/web/News/News-items/SSSRIMARIN-seminar-May-18-2018-Shanghai.htm
>
> 
> From: Satish Balay <ba...@mcs.anl.gov>
> Sent: Thursday, March 22, 2018 3:14 PM
> To: Klaij, Christiaan
> Cc: petsc-users
> Subject: Re: [petsc-users] problem configuring 3.8.3 with intel mpi
>
Sure - they have so many ways of tweaking things - so one can't assume things
will be the same across multiple users.

Most users don't set these env variables [that's extra work - and not
the default behavior anyway] - so they rely on the default behavior -
i.e. use mpicc for gnu and mpiicc for intel compilers.
>
> [i.e the guesses picked up by configure might not always be what the
> user wants - so its best to tell configure not to guess - and tell it
> exactly what one wants it to use. And there might be scope to improve
> guessing in favor of most common use cases.]
>
> Satish
>
> On Thu, 22 Mar 2018, Klaij, Christiaan wrote:
>
> > Fair enough.
> >
> > As a side note: if you want intel compilers, there's no need to
> > specify mpiicc, intel says to export I_MPI_CC=icc, which gives
> >
> > $ which mpicc
> > /opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64/bin/mpicc
> > $ mpicc -v
> > mpiicc for the Intel(R) MPI Library 2017 Update 1 for Linux*
> > Copyright(C) 2003-2016, Intel Corporation.  All rights reserved.
> > icc version 17.0.1 (gcc version 4.8.5 compatibility)
> >
> > (otherwise they wrap the gnu compilers by default, baffling)
> >
> > Chris
> >
> >
> > dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> > MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
> >
> > MARIN news: 
> > http://www.marin.nl/web/News/News-items/Frequency-of-spill-model-for-area-risk-assessment-of-shipsource-oil-spills-in-Canadian-waters-1.htm
> >
> > ____
> > From: Satish Balay <ba...@mcs.anl.gov>
> > Sent: Thursday, March 22, 2018 2:54 PM
> > To: Klaij, Christiaan
> > Cc: petsc-users
> > Subject: Re: [petsc-users] problem configuring 3.8.3 with intel mpi
> >
> > On Thu, 22 Mar 2018, Klaij, Christiaan wrote:
> >
> > > OK, configure works with the option --with-cc=/path/to/mpicc,
> > > thanks!
> > >
> > > I'm not sure you are right about checking --with-mpi-dir/bin,
> >
> > There are 2 types of options to configure:
> >  - tell configure what to use - i.e do not guess [like --with-cc=mpicc']
> >  - tell configure to *guess* with partial info [like 
> > --with-package-dir=/path/to/foo/bar]
> >
> > When configure has to guess - it will never be perfect.

Re: [petsc-users] problem configuring 3.8.3 with intel mpi

2018-03-22 Thread Klaij, Christiaan
Guess I shouldn't have read the intel manual :-)

Lesson learned, no more guessing. And I sure appreciate all the
effort you are putting into this, especially the option to
configure and build other packages in a consistent way with the
--download-package option.

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/SSSRIMARIN-seminar-May-18-2018-Shanghai.htm


From: Satish Balay <ba...@mcs.anl.gov>
Sent: Thursday, March 22, 2018 3:14 PM
To: Klaij, Christiaan
Cc: petsc-users
Subject: Re: [petsc-users] problem configuring 3.8.3 with intel mpi

Sure - they have so many ways of tweaking things - so one can't assume things
will be the same across multiple users.

Most users don't set these env variables [that's extra work - and not
the default behavior anyway] - so they rely on the default behavior -
i.e. use mpicc for gnu and mpiicc for intel compilers.

[i.e the guesses picked up by configure might not always be what the
user wants - so its best to tell configure not to guess - and tell it
exactly what one wants it to use. And there might be scope to improve
guessing in favor of most common use cases.]

Satish

On Thu, 22 Mar 2018, Klaij, Christiaan wrote:

> Fair enough.
>
> As a side note: if you want intel compilers, there's no need to
> specify mpiicc, intel says to export I_MPI_CC=icc, which gives
>
> $ which mpicc
> /opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64/bin/mpicc
> $ mpicc -v
> mpiicc for the Intel(R) MPI Library 2017 Update 1 for Linux*
> Copyright(C) 2003-2016, Intel Corporation.  All rights reserved.
> icc version 17.0.1 (gcc version 4.8.5 compatibility)
>
> (otherwise they wrap the gnu compilers by default, baffling)
>
> Chris
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>
> MARIN news: 
> http://www.marin.nl/web/News/News-items/Frequency-of-spill-model-for-area-risk-assessment-of-shipsource-oil-spills-in-Canadian-waters-1.htm
>
> 
> From: Satish Balay <ba...@mcs.anl.gov>
> Sent: Thursday, March 22, 2018 2:54 PM
> To: Klaij, Christiaan
> Cc: petsc-users
> Subject: Re: [petsc-users] problem configuring 3.8.3 with intel mpi
>
> On Thu, 22 Mar 2018, Klaij, Christiaan wrote:
>
> > OK, configure works with the option --with-cc=/path/to/mpicc,
> > thanks!
> >
> > I'm not sure you are right about checking --with-mpi-dir/bin,
>
> There are 2 types of options to configure:
>  - tell configure what to use - i.e do not guess [like --with-cc=mpicc']
>  - tell configure to *guess* with partial info [like 
> --with-package-dir=/path/to/foo/bar]
>
> When configure has to guess - it will never be perfect.
>
> Sure - perhaps the guessing of --with-mpi-dir can be improved - but it will 
> never be perfect.
>
> > because, at least with this install, the binaries are located in
> > --with-mpi-dir/intel64/bin:
> >
> > $ which mpicc
> > /opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64/bin/mpicc
>
> So the more appropriate thing here [wrt petsc configure] is
> --with-mpi-dir=/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64
>
> And then - what if you want to use intel compilers and not gnu
> compilers [with intel mpi]? - you have to specify mpiicc - and not
> mpicc - so it's best to prefer 'no-guess' options like --with-cc.
> [with-mpi-dir is usually convenient - but not always suitable]
>
> Satish
>
> >
> > Perhaps something to do with intel supporting both 32 and
> > 64bit. Anyway, our sysadmin didn't change any of these locations,
> > they just run the intel installer and this is what we got.
>



Re: [petsc-users] problem configuring 3.8.3 with intel mpi

2018-03-22 Thread Klaij, Christiaan
Fair enough.

As a side note: if you want intel compilers, there's no need to
specify mpiicc, intel says to export I_MPI_CC=icc, which gives

$ which mpicc
/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64/bin/mpicc
$ mpicc -v
mpiicc for the Intel(R) MPI Library 2017 Update 1 for Linux*
Copyright(C) 2003-2016, Intel Corporation.  All rights reserved.
icc version 17.0.1 (gcc version 4.8.5 compatibility)

(otherwise they wrap the gnu compilers by default, baffling)

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Frequency-of-spill-model-for-area-risk-assessment-of-shipsource-oil-spills-in-Canadian-waters-1.htm


From: Satish Balay <ba...@mcs.anl.gov>
Sent: Thursday, March 22, 2018 2:54 PM
To: Klaij, Christiaan
Cc: petsc-users
Subject: Re: [petsc-users] problem configuring 3.8.3 with intel mpi

On Thu, 22 Mar 2018, Klaij, Christiaan wrote:

> OK, configure works with the option --with-cc=/path/to/mpicc,
> thanks!
>
> I'm not sure you are right about checking --with-mpi-dir/bin,

There are 2 types of options to configure:
 - tell configure what to use - i.e do not guess [like --with-cc=mpicc']
 - tell configure to *guess* with partial info [like 
--with-package-dir=/path/to/foo/bar]

When configure has to guess - it will never be perfect.

Sure - perhaps the guessing of --with-mpi-dir can be improved - but it will 
never be perfect.

> because, at least with this install, the binaries are located in
> --with-mpi-dir/intel64/bin:
>
> $ which mpicc
> /opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64/bin/mpicc

So the more appropriate thing here [wrt petsc configure] is
--with-mpi-dir=/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64

And then - what if you want to use intel compilers and not gnu
compilers [with intel mpi]? - you have to specify mpiicc - and not
mpicc - so it's best to prefer 'no-guess' options like --with-cc.
[with-mpi-dir is usually convenient - but not always suitable]

Satish

>
> Perhaps something to do with intel supporting both 32 and
> 64bit. Anyway, our sysadmin didn't change any of these locations,
> they just run the intel installer and this is what we got.


Re: [petsc-users] problem configuring 3.8.3 with intel mpi

2018-03-22 Thread Klaij, Christiaan
There's a bin64 symlink:

$ ls -l /opt/intel/compilers_and_libraries_2017.1.132/linux/mpi
total 0
drwxr-xr-x. 3 root root 16 Mar 28  2017 benchmarks
lrwxrwxrwx. 1 root root 11 Mar 28  2017 bin64 -> intel64/bin
drwxr-xr-x. 2 root root 41 Mar 28  2017 binding
lrwxrwxrwx. 1 root root 11 Mar 28  2017 etc64 -> intel64/etc
lrwxrwxrwx. 1 root root 15 Mar 28  2017 include64 -> intel64/include
drwxr-xr-x. 6 root root 50 Mar 28  2017 intel64
lrwxrwxrwx. 1 root root 11 Mar 28  2017 lib64 -> intel64/lib
drwxr-xr-x. 4 root root 28 Mar 28  2017 man
drwxr-xr-x. 6 root root 50 Mar 28  2017 mic
drwxr-xr-x. 2 root root 78 Mar 28  2017 test
$



dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/SSSRIMARIN-seminar-May-18-2018-Shanghai.htm

________
From: Klaij, Christiaan
Sent: Thursday, March 22, 2018 2:43 PM
To: petsc-users
Subject: Re: [petsc-users] problem configuring 3.8.3 with intel mpi

OK, configure works with the option --with-cc=/path/to/mpicc,
thanks!

I'm not sure you are right about checking --with-mpi-dir/bin,
because, at least with this install, the binaries are located in
--with-mpi-dir/intel64/bin:

$ which mpicc
/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64/bin/mpicc

Perhaps something to do with intel supporting both 32 and
64bit. Anyway, our sysadmin didn't change any of these locations,
they just run the intel installer and this is what we got.



From: Satish Balay <ba...@mcs.anl.gov>
Sent: Thursday, March 22, 2018 2:18 PM
To: petsc-users
Cc: Klaij, Christiaan
Subject: Re: [petsc-users] problem configuring 3.8.3 with intel mpi

Added this change to balay/mpi-dir-check-warning/maint and merged to next.

Satish

On Thu, 22 Mar 2018, Satish Balay wrote:

> The relevant change that is causing the difference is:
>
> https://bitbucket.org/petsc/petsc/commits/a98758b74fbc47f3dca87b526141d347301fd5eb
>
> Perhaps we should print a warning if with-mpi-dir/bin is missing.
>
> $ ./configure --with-mpi-dir=$HOME/tmp
> ===
>  Configuring PETSc to compile on your system
> ===
> ===
>   
> * WARNING: /home/balay/tmp/bin dir does not exist!
>   
> Skipping check for MPI compilers due to potentially incorrect --with-mpi-dir 
> option.   Suggest 
> using --with-cc=/path/to/mpicc option instead **  
>   
> ===
>
> Satish
>
> -
>
> $ git diff
> diff --git a/config/BuildSystem/config/setCompilers.py 
> b/config/BuildSystem/config/setCompilers.py
> index 3b4b723e71..1ca92d309a 100644
> --- a/config/BuildSystem/config/setCompilers.py
> +++ b/config/BuildSystem/config/setCompilers.py
> @@ -515,6 +515,8 @@ class Configure(config.base.Configure):
>if self.useMPICompilers() and 'with-mpi-dir' in self.argDB:
># if it gets here these means that self.argDB['with-mpi-dir']/bin does 
> not exist so we should not search for MPI compilers
># that is we are turning off the self.useMPICompilers()
> +self.logPrintBox('* WARNING: 
> '+os.path.join(self.argDB['with-mpi-dir'], 'bin')+ ' dir does not exist!\n 
> Skipping check for MPI compilers due to potentially incorrect --with-mpi-dir 
> option.\n Suggest using --with-cc=/path/to/mpicc option instead **')
> +
>  self.argDB['with-mpi-compilers'] = 0
>if self.useMPICompilers():
>  self.usedMPICompilers = 1
>
>
> On Thu, 22 Mar 2018, Satish Balay wrote:
>
> > In either case 
> > --mpi-dir=/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi is the 
> > wrong option.
> >
> > Its a shortcut for 
> > --with-cc=/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/bin/mpicc 
> > - which you
> > don't have.
> >
> > Since you have mpicc in your path - you can just use:
> >
> > --with-cc=mpicc [or skip specifying it - and configure will look at your 
> > path]
> >
> > Satish
> >
> > On Thu, 22 Mar 2018, Klaij, Christiaan wrote:
> >
> > > Matt,
> > >
> > > The pro

Re: [petsc-users] problem configuring 3.8.3 with intel mpi

2018-03-22 Thread Klaij, Christiaan
OK, configure works with the option --with-cc=/path/to/mpicc,
thanks!

I'm not sure you are right about checking --with-mpi-dir/bin,
because, at least with this install, the binaries are located in
--with-mpi-dir/intel64/bin:

$ which mpicc
/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64/bin/mpicc

Perhaps something to do with intel supporting both 32 and
64bit. Anyway, our sysadmin didn't change any of these locations,
they just run the intel installer and this is what we got.




dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Office-tool-aNySIM-integrated-into-realtime-onboard-software-1.htm


From: Satish Balay <ba...@mcs.anl.gov>
Sent: Thursday, March 22, 2018 2:18 PM
To: petsc-users
Cc: Klaij, Christiaan
Subject: Re: [petsc-users] problem configuring 3.8.3 with intel mpi

Added this change to balay/mpi-dir-check-warning/maint and merged to next.

Satish

On Thu, 22 Mar 2018, Satish Balay wrote:

> The relevant change that is causing the difference is:
>
> https://bitbucket.org/petsc/petsc/commits/a98758b74fbc47f3dca87b526141d347301fd5eb
>
> Perhaps we should print a warning if with-mpi-dir/bin is missing.
>
> $ ./configure --with-mpi-dir=$HOME/tmp
> ===
>  Configuring PETSc to compile on your system
> ===
> ===
>   
> * WARNING: /home/balay/tmp/bin dir does not exist!
>   
> Skipping check for MPI compilers due to potentially incorrect --with-mpi-dir 
> option.   Suggest 
> using --with-cc=/path/to/mpicc option instead **  
>   
> ===
>
> Satish
>
> -
>
> $ git diff
> diff --git a/config/BuildSystem/config/setCompilers.py 
> b/config/BuildSystem/config/setCompilers.py
> index 3b4b723e71..1ca92d309a 100644
> --- a/config/BuildSystem/config/setCompilers.py
> +++ b/config/BuildSystem/config/setCompilers.py
> @@ -515,6 +515,8 @@ class Configure(config.base.Configure):
>if self.useMPICompilers() and 'with-mpi-dir' in self.argDB:
># if it gets here these means that self.argDB['with-mpi-dir']/bin does 
> not exist so we should not search for MPI compilers
># that is we are turning off the self.useMPICompilers()
> +self.logPrintBox('* WARNING: 
> '+os.path.join(self.argDB['with-mpi-dir'], 'bin')+ ' dir does not exist!\n 
> Skipping check for MPI compilers due to potentially incorrect --with-mpi-dir 
> option.\n Suggest using --with-cc=/path/to/mpicc option instead **')
> +
>  self.argDB['with-mpi-compilers'] = 0
>if self.useMPICompilers():
>  self.usedMPICompilers = 1
>
>
> On Thu, 22 Mar 2018, Satish Balay wrote:
>
> > In either case 
> > --mpi-dir=/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi is the 
> > wrong option.
> >
> > Its a shortcut for 
> > --with-cc=/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/bin/mpicc 
> > - which you
> > don't have.
> >
> > Since you have mpicc in your path - you can just use:
> >
> > --with-cc=mpicc [or skip specifying it - and configure will look at your 
> > path]
> >
> > Satish
> >
> > On Thu, 22 Mar 2018, Klaij, Christiaan wrote:
> >
> > > Matt,
> > >
> > > The problem must be earlier, because it should be using mpicc for
> > > the check, not gcc. The mpi.h is found here by the 3.7.5 config:
> > >
> > > Executing: mpicc -E  -I/tmp/petsc-JIF0WC/config.setCompilers 
> > > -I/tmp/petsc-JIF0WC/config.types -I/tmp/petsc-JIF0WC/config.headers  
> > > -I/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/include 
> > > -I/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64/include 
> > > /tmp/petsc-JIF0WC/config.headers/conftest.c
> > > stdout:
> > > # 1 "/tmp/petsc-JIF0WC/config.headers/conftest.c"
> > > # 1 "/tmp/petsc-JIF0WC/config.headers/confdefs.h" 1
> > > # 2 "/tmp/petsc-JIF0WC/config.headers/conftest.c" 2
> > > # 1 "/tmp/pets

Re: [petsc-users] problem configuring 3.8.3 with intel mpi

2018-03-22 Thread Klaij, Christiaan
Matt,

The problem must be earlier, because it should be using mpicc for
the check, not gcc. The mpi.h is found here by the 3.7.5 config:

Executing: mpicc -E  -I/tmp/petsc-JIF0WC/config.setCompilers 
-I/tmp/petsc-JIF0WC/config.types -I/tmp/petsc-JIF0WC/config.headers  
-I/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/include 
-I/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64/include 
/tmp/petsc-JIF0WC/config.headers/conftest.c
stdout:
# 1 "/tmp/petsc-JIF0WC/config.headers/conftest.c"
# 1 "/tmp/petsc-JIF0WC/config.headers/confdefs.h" 1
# 2 "/tmp/petsc-JIF0WC/config.headers/conftest.c" 2
# 1 "/tmp/petsc-JIF0WC/config.headers/conffix.h" 1
# 3 "/tmp/petsc-JIF0WC/config.headers/conftest.c" 2
# 1 
"/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/intel64/include/mpi.h" 
1

Note that the machine and ENV are exactly the same, the only
change is 3.7.5 versus 3.8.3.

Chris

dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>

MARIN news: Office tool aNySIM integrated into real-time onboard 
software<http://www.marin.nl/web/News/News-items/Office-tool-aNySIM-integrated-into-realtime-onboard-software-1.htm>

________
From: Matthew Knepley <knep...@gmail.com>
Sent: Thursday, March 22, 2018 1:27 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] problem configuring 3.8.3 with intel mpi

On Thu, Mar 22, 2018 at 8:00 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
Satish,

I'm trying to upgrade from 3.7.5 to 3.8.3. The first problem is
that my intel mpi installation, which works for 3.7.5, fails with
3.8.3, see the attached logs. It seems that the mpi compilers are
not found anymore. Any ideas?

The header check failed:

  Checking include with compiler flags var CPPFLAGS 
['/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/include']
Executing: gcc -E  -I/tmp/petsc-8GlPlN/config.setCompilers 
-I/tmp/petsc-8GlPlN/config.types -I/tmp/petsc-8GlPlN/config.headers  
-I/opt/intel/compilers_and_libraries_2017.1.132/linux/mpi/include 
/tmp/petsc-8GlPlN/config.headers/conftest.c
stdout:
# 1 "/tmp/petsc-8GlPlN/config.headers/conftest.c"
# 1 ""
# 1 ""
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 1 "" 2
# 1 "/tmp/petsc-8GlPlN/config.headers/conftest.c"
# 1 "/tmp/petsc-8GlPlN/config.headers/confdefs.h" 1
# 2 "/tmp/petsc-8GlPlN/config.headers/conftest.c" 2
# 1 "/tmp/petsc-8GlPlN/config.headers/conffix.h" 1
# 3 "/tmp/petsc-8GlPlN/config.headers/conftest.c" 2
Possible ERROR while running preprocessor: exit code 256
stdout:
# 1 "/tmp/petsc-8GlPlN/config.headers/conftest.c"
# 1 ""
# 1 ""
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 1 "" 2
# 1 "/tmp/petsc-8GlPlN/config.headers/conftest.c"
# 1 "/tmp/petsc-8GlPlN/config.headers/confdefs.h" 1
# 2 "/tmp/petsc-8GlPlN/config.headers/conftest.c" 2
# 1 "/tmp/petsc-8GlPlN/config.headers/conffix.h" 1
# 3 "/tmp/petsc-8GlPlN/config.headers/conftest.c" 2stderr:
/tmp/petsc-8GlPlN/config.headers/conftest.c:3:17: fatal error: mpi.h: No such 
file or directory
 #include <mpi.h>
 ^
compilation terminated.
Source:
#include "confdefs.h"
#include "conffix.h"
#include <mpi.h>
Preprocess stderr before 
filtering:/tmp/petsc-8GlPlN/config.headers/conftest.c:3:17: fatal error: mpi.h: 
No such file or directory
 #include <mpi.h>

Where are the headers?

  Thanks,

Matt

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Office-tool-aNySIM-integrated-into-realtime-onboard-software-1.htm




--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/<http://www.caam.rice.edu/~mk51/>




Re: [petsc-users] segfault after recent scientific linux upgrade

2017-12-07 Thread Klaij, Christiaan
Almost valgrind clean. We use Intel MPI so we need a handful of suppressions.

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/GROW-partners-innovate-together-in-offshore-wind-industry.htm


From: Satish Balay <ba...@mcs.anl.gov>
Sent: Thursday, December 07, 2017 6:07 PM
To: Klaij, Christiaan
Cc: petsc-users
Subject: Re: [petsc-users] segfault after recent scientific linux upgrade

Could you check if your code is valgrind clean?

Satish

On Thu, 7 Dec 2017, Klaij, Christiaan wrote:

> Satish,
>
> As a first try, I've kept petsc-3.7.5 and only replaced superlu
> by the new xsdk-0.2.0-rc1 version. Unfortunately, this doesn't
> fix the problem, see the backtrace below.
>
> Fande,
>
> Perhaps the problem is related to petsc, not superlu?
>
> What really puzzles me is that everything was working fine with
> petsc-3.7.5 and superlu_dist_5.3.1, it only broke after we
> updated Scientific Linux 7. So this bug (in petsc or in superlu)
> was already there but somehow not triggered before the SL7
> update?
>
> Chris
>
> (gdb) bt
> #0  0x2b38995fa30c in mc64wd_dist (n=0x3da6230, ne=0x2, ip=0x1,
> irn=0x3d424e0, a=0x3d82220, iperm=0x1000, num=0x7ffc505dd294,
> jperm=0x3d7a220, out=0x3d7e220, pr=0x3d82220, q=0x3d86220, l=0x3d8a220,
> u=0x3d8e230, d__=0x3d96230)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/mc64ad_dist.c:2322
> #1  0x2b38995f5f7b in mc64ad_dist (job=0x3da6230, n=0x2, ne=0x1,
> ip=0x3d424e0, irn=0x3d82220, a=0x1000, num=0x7ffc505dd2b0,
> cperm=0x3d8e230, liw=0x3d1acd0, iw=0x3d560f0, ldw=0x3d424e0, dw=0x3d0e530,
> icntl=0x3d7a220, info=0x2b3899615546 <dldperm_dist+614>)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/mc64ad_dist.c:596
> #2  0x2b3899615546 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x3d424e0,
> adjncy=0x3d82220, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x3d0e001)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/dldperm_dist.c:141
> #3  0x2b389960d286 in pdgssvx_ABglobal (options=0x3da6230, A=0x2,
> ScalePermstruct=0x1, B=0x3d424e0, ldb=64496160, nrhs=4096, grid=0x3d009f0,
> LUstruct=0x3d0df00, berr=0x1000,
> stat=0x2b389851da7d <MatLUFactorNumeric_SuperLU_DIST+2349>, 
> info=0x3d0df18)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/pdgssvx_ABglobal.c:716
> #4  0x2b389851da7d in MatLUFactorNumeric_SuperLU_DIST (F=0x3da6230, A=0x2,
> ---Type <return> to continue, or q <return> to quit---
> info=0x1)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419
> #5  0x2b389852ca1a in MatLUFactorNumeric (fact=0x3da6230, mat=0x2,
> info=0x1)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996
> #6  0x2b38988856c7 in PCSetUp_LU (pc=0x3da6230)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172
> #7  0x2b38987d4084 in PCSetUp (pc=0x3da6230)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968
> #8  0x2b389891068d in KSPSetUp (ksp=0x3da6230)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390
> #9  0x2b389890c7be in KSPSolve (ksp=0x3da6230, b=0x2, x=0x2d18d90)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599
> #10 0x2b3898925142 in kspsolve_ (ksp=0x3da6230, b=0x2, x=0x1,
> __ierr=0x3d424e0)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/ftn-auto/itfuncf.c:261
> ---Type <return> to continue, or q <return> to quit---
> #11 0x00bccf71 in petsc_solvers::petsc_solvers_solve (
> regname='massTransport', rhs_c=..., phi_c=..., tol=0.01, maxiter=500,
> res0=-9.2559631349317831e+61, usediter=0, .tmp.REGNAME.len_V$1790=13)
> at petsc_solvers.F90:580
> #12 0x00c2c9c5 in mass_momentum::mass_momentum_pressureprediction ()
> at mass_momentum.F90:989
> #13 0x00c0ffc1 in mass_momentum::mass_momentum_core ()
> at mass_momentum.F90:626
> #14 0x00c26a2c in mass_momentum::mass_momentum_systempcapply (
> aa_system=54952496, xx_system=47570896, rr_system=47572416, ierr=0)
> at mass_momentum.F90:919
> #15 0x2b3898891763 in ourshellapply (pc=0x3468230, x=0x2d5dfd0

Re: [petsc-users] segfault after recent scientific linux upgrade

2017-12-07 Thread Klaij, Christiaan
Fande,


That's what I did, I started the whole petsc config and build from scratch. The 
backtrace now says:


/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/superlu_dist-xsdk-0.2.0-rc1/SRC/mc64ad_dist.c:2322


instead of the old:


/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322


which doesn't exist anymore. The entire install directory is new:


$ ls -lh /home/cklaij/ReFRESCO/Dev/trunk/Libs/install
drwxr-xr-x. 5 cklaij domain users   85 Dec  7 14:31 Linux-x86_64-Intel
drwxr-xr-x. 6 cklaij domain users   75 Dec  7 14:33 petsc-3.7.5


$ ls -lh /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel
total 12K
drwxr-xr-x.  7 cklaij domain users 4.0K Dec  7 14:30 metis-5.1.0-p3
drwxrwxr-x.  7 cklaij domain users 4.0K Dec  7 14:30 parmetis-4.0.3-p3
drwxrwxr-x. 11 cklaij domain users 4.0K Dec  7 14:31 superlu_dist-xsdk-0.2.0-rc1

Chris


dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>

MARIN news: MARIN at Marintec China, Shanghai, December 
5-8<http://www.marin.nl/web/News/News-items/MARIN-at-Marintec-China-Shanghai-December-58-1.htm>


From: Kong, Fande <fande.k...@inl.gov>
Sent: Thursday, December 07, 2017 4:26 PM
To: Klaij, Christiaan
Cc: petsc-users
Subject: Re: [petsc-users] segfault after recent scientific linux upgrade



On Thu, Dec 7, 2017 at 8:15 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
Satish,

As a first try, I've kept petsc-3.7.5 and only replaced superlu
by the new xsdk-0.2.0-rc1 version. Unfortunately, this doesn't
fix the problem, see the backtrace below.

Fande,

Perhaps the problem is related to petsc, not superlu?

What really puzzles me is that everything was working fine with
petsc-3.7.5 and superlu_dist_5.3.1, it only broke after we
updated Scientific Linux 7. So this bug (in petsc or in superlu)
was already there but somehow not triggered before the SL7
update?

Chris


I do not know how you installed PETSc. It looks like you keep using the
old superlu_dist. You have to delete the old package and start from
scratch. PETSc does not automatically clean the old one. For me, I just simply
"rm -rf $PETSC_ARCH" every time before I reinstall a new PETSc.


Fande,






Re: [petsc-users] segfault after recent scientific linux upgrade

2017-12-07 Thread Klaij, Christiaan
ass_momentum::mass_momentum_solve ()
at mass_momentum.F90:465
#23 0x0041b5ec in refresco () at refresco.F90:259
#24 0x0041999e in main ()
#25 0x2b38a067fc05 in __libc_start_main () from /lib64/libc.so.6
#26 0x004198a3 in _start ()
(gdb)



dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Simulator-facility-in-Houston-as-bridge-between-engineering-and-operations.htm

________
From: Klaij, Christiaan
Sent: Thursday, December 07, 2017 12:02 PM
To: petsc-users
Cc: Fande Kong
Subject: Re: [petsc-users] segfault after recent scientific linux upgrade

Thanks Satish, I will give it a shot and let you know.

Chris

From: Satish Balay <ba...@mcs.anl.gov>
Sent: Wednesday, December 06, 2017 6:05 PM
To: Klaij, Christiaan
Cc: Fande Kong; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] segfault after recent scientific linux upgrade

petsc 3.7 - and 3.8 both default to superlu_dist snapshot:

self.gitcommit = 'xsdk-0.2.0-rc1'

If using petsc-3.7 - you can use latest maint-3.7 [i.e 3.7.7+]
[3.7.7 is the latest bugfix update to 3.7 - so there should be no reason to stick 
to 3.7.5]

But if you really want to stick to 3.7.5 you can use:

--download-superlu_dist=1 --download-superlu_dist-commit=xsdk-0.2.0-rc1

Satish

On Wed, 6 Dec 2017, Klaij, Christiaan wrote:

> Fande,
>
> Thanks, that's good to know. Upgrading to 3.8.x is definitely my
> long-term plan, but is there anything I can do short-term to fix
> the problem while keeping 3.7.5?
>
> Chris
>
> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
> www.marin.nl<http://www.marin.nl>
>
> MARIN news: Seminar 'Blauwe toekomst: versnellen van innovaties door 
> samenwerken'<http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toekomst-versnellen-van-innovaties-door-samenwerken.htm>
>
> ____
> From: Fande Kong <fdkong...@gmail.com>
> Sent: Tuesday, December 05, 2017 4:30 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] segfault after recent scientific linux upgrade
>
> I would like to suggest you use PETSc-3.8.x. Then the bug should go away. 
> It is a known bug related to the reuse of the factorization pattern.
>
>
> Fande,
>
> On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan 
> <c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
> I'm running production software with petsc-3.7.5 and, among
> others, superlu_dist 5.1.3 on scientific linux 7.4.
>
> After a recent update of SL7.4, notably of the kernel and glibc,
> we found that superlu is somehow broken. Below's a backtrace of a
> serial example. Is this a known issue? Could you please advise on
> how to proceed (preferably while keeping 3.7.5 for now).
>
> Thanks,
> Chris
>
> $ gdb ./refresco ./core.9810
> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-redhat-linux-gnu".
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>...
> Reading symbols from 
> /home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done.
> [New LWP 9810]
> Missing separate debuginfo for 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10
> Try: yum --enablerepo='*debug*' install 
> /usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug
> Missing separate debuginfo for 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10
> Try: yum --enablerepo='*debug*' install 
> /usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `./refresco'.
> Program terminated with signal 11, Segmentation fault.
> #0  0x2ba501c132bc in mc64wd_dist (n=0x5

Re: [petsc-users] segfault after recent scientific linux upgrade

2017-12-07 Thread Klaij, Christiaan
Thanks Satish, I will give it a shot and let you know.

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/MARIN-at-Marintec-China-Shanghai-December-58-1.htm


From: Satish Balay <ba...@mcs.anl.gov>
Sent: Wednesday, December 06, 2017 6:05 PM
To: Klaij, Christiaan
Cc: Fande Kong; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] segfault after recent scientific linux upgrade

petsc 3.7 - and 3.8 both default to superlu_dist snapshot:

self.gitcommit = 'xsdk-0.2.0-rc1'

If using petsc-3.7 - you can use latest maint-3.7 [i.e 3.7.7+]
[3.7.7 is the latest bugfix update to 3.7 - so there should be no reason to stick 
to 3.7.5]

But if you really want to stick to 3.7.5 you can use:

--download-superlu_dist=1 --download-superlu_dist-commit=xsdk-0.2.0-rc1

Satish

On Wed, 6 Dec 2017, Klaij, Christiaan wrote:

> Fande,
>
> Thanks, that's good to know. Upgrading to 3.8.x is definitely my
> long-term plan, but is there anything I can do short-term to fix
> the problem while keeping 3.7.5?
>
> Chris
>
> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
> www.marin.nl<http://www.marin.nl>
>
> MARIN news: Seminar 'Blauwe toekomst: versnellen van innovaties door 
> samenwerken'<http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toekomst-versnellen-van-innovaties-door-samenwerken.htm>
>
> ________
> From: Fande Kong <fdkong...@gmail.com>
> Sent: Tuesday, December 05, 2017 4:30 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] segfault after recent scientific linux upgrade
>
> I would like to suggest you use PETSc-3.8.x. Then the bug should go away. 
> It is a known bug related to the reuse of the factorization pattern.
>
>
> Fande,
>
> On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan 
> <c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
> I'm running production software with petsc-3.7.5 and, among
> others, superlu_dist 5.1.3 on scientific linux 7.4.
>
> After a recent update of SL7.4, notably of the kernel and glibc,
> we found that superlu is somehow broken. Below's a backtrace of a
> serial example. Is this a known issue? Could you please advise on
> how to proceed (preferably while keeping 3.7.5 for now).
>
> Thanks,
> Chris
>
> $ gdb ./refresco ./core.9810
> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
> Copyright (C) 2013 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-redhat-linux-gnu".
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>...
> Reading symbols from 
> /home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done.
> [New LWP 9810]
> Missing separate debuginfo for 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10
> Try: yum --enablerepo='*debug*' install 
> /usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug
> Missing separate debuginfo for 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10
> Try: yum --enablerepo='*debug*' install 
> /usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `./refresco'.
> Program terminated with signal 11, Segmentation fault.
> #0  0x2ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1,
> irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94,
> jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260,
> u=0x51fb270, d__=0x5203270)
> at 
> /home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322
> 2322        if (iperm[i__] != 0 || iperm[i0] == 0) {
> Missing separate debuginfos, use: debuginfo-install 
> bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.

Re: [petsc-users] segfault after recent scientific linux upgrade

2017-12-05 Thread Klaij, Christiaan
Fande,

Thanks, that's good to know. Upgrading to 3.8.x is definitely my
long-term plan, but is there anything I can do short-term to fix
the problem while keeping 3.7.5?

Chris

dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>

MARIN news: Seminar 'Blauwe toekomst: versnellen van innovaties door 
samenwerken'<http://www.marin.nl/web/News/News-items/Seminar-Blauwe-toekomst-versnellen-van-innovaties-door-samenwerken.htm>


From: Fande Kong <fdkong...@gmail.com>
Sent: Tuesday, December 05, 2017 4:30 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] segfault after recent scientific linux upgrade

I would like to suggest you use PETSc-3.8.x. Then the bug should go away. It 
is a known bug related to the reuse of the factorization pattern.


Fande,

On Tue, Dec 5, 2017 at 8:07 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
I'm running production software with petsc-3.7.5 and, among
others, superlu_dist 5.1.3 on scientific linux 7.4.

After a recent update of SL7.4, notably of the kernel and glibc,
we found that superlu is somehow broken. Below's a backtrace of a
serial example. Is this a known issue? Could you please advise on
how to proceed (preferably while keeping 3.7.5 for now).

Thanks,
Chris

$ gdb ./refresco ./core.9810
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from 
/home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done.
[New LWP 9810]
Missing separate debuginfo for 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10
Try: yum --enablerepo='*debug*' install 
/usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug
Missing separate debuginfo for 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10
Try: yum --enablerepo='*debug*' install 
/usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `./refresco'.
Program terminated with signal 11, Segmentation fault.
#0  0x2ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1,
irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94,
jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260,
u=0x51fb270, d__=0x5203270)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322
2322        if (iperm[i__] != 0 || iperm[i0] == 0) {
Missing separate debuginfos, use: debuginfo-install 
bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 
keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 
libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 
libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 
libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 
pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0  0x2ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1,
irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94,
jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260,
u=0x51fb270, d__=0x5203270)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322
#1  0x2ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1,
ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0,
cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, dw=0x517b570,
icntl=0x51e7260, info=0x2ba501c2e556 <dldperm_dist+614>)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596
#2  0x2ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x51af520,
adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141
#3  0x2ba501c26296 in pdgssvx_ABglobal (options=0x5

[petsc-users] segfault after recent scientific linux upgrade

2017-12-05 Thread Klaij, Christiaan
I'm running production software with petsc-3.7.5 and, among
others, superlu_dist 5.1.3 on scientific linux 7.4.

After a recent update of SL7.4, notably of the kernel and glibc,
we found that superlu is somehow broken. Below's a backtrace of a
serial example. Is this a known issue? Could you please advise on
how to proceed (preferably while keeping 3.7.5 for now).

Thanks,
Chris

$ gdb ./refresco ./core.9810
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from 
/home/cklaij/ReFRESCO/Dev/trunk/Suites/testSuite/FlatPlate_laminar/calcs/Grid64x64/refresco...done.
[New LWP 9810]
Missing separate debuginfo for 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libssl.so.10
Try: yum --enablerepo='*debug*' install 
/usr/lib/debug/.build-id/68/6a25d0a83d002183c835fa5694a8110c78d3bc.debug
Missing separate debuginfo for 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/licensing-1.55.0/sll/lib64/libcrypto.so.10
Try: yum --enablerepo='*debug*' install 
/usr/lib/debug/.build-id/68/d2958189303f421b1082abc33fd87338826c65.debug
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `./refresco'.
Program terminated with signal 11, Segmentation fault.
#0  0x2ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1,
irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94,
jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260,
u=0x51fb270, d__=0x5203270)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322
2322        if (iperm[i__] != 0 || iperm[i0] == 0) {
Missing separate debuginfos, use: debuginfo-install 
bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-196.el7.x86_64 
keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-8.el7.x86_64 
libcom_err-1.42.9-10.el7.x86_64 libgcc-4.8.5-16.el7.x86_64 
libselinux-2.5-11.el7.x86_64 libstdc++-4.8.5-16.el7.x86_64 
libxml2-2.9.1-6.el7_2.3.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64 
pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0  0x2ba501c132bc in mc64wd_dist (n=0x5213270, ne=0x2, ip=0x1,
irn=0x51af520, a=0x51ef260, iperm=0x1000, num=0x7ffc545b2d94,
jperm=0x51e7260, out=0x51eb260, pr=0x51ef260, q=0x51f3260, l=0x51f7260,
u=0x51fb270, d__=0x5203270)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:2322
#1  0x2ba501c0ef2b in mc64ad_dist (job=0x5213270, n=0x2, ne=0x1,
ip=0x51af520, irn=0x51ef260, a=0x1000, num=0x7ffc545b2db0,
cperm=0x51fb270, liw=0x5187d10, iw=0x51c3130, ldw=0x51af520, dw=0x517b570,
    icntl=0x51e7260, info=0x2ba501c2e556 <dldperm_dist+614>)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/mc64ad_dist.c:596
#2  0x2ba501c2e556 in dldperm_dist (job=0, n=0, nnz=0, colptr=0x51af520,
adjncy=0x51ef260, nzval=0x1000, perm=0x4f00, u=0x1000, v=0x517b001)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/dldperm_dist.c:141
#3  0x2ba501c26296 in pdgssvx_ABglobal (options=0x5213270, A=0x2,
ScalePermstruct=0x1, B=0x51af520, ldb=85914208, nrhs=4096, grid=0x516da30,
LUstruct=0x517af40, berr=0x1000,
    stat=0x2ba500b36a7d <MatLUFactorNumeric_SuperLU_DIST+2349>, info=0x517af58)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/Linux-x86_64-Intel/SuperLU_DIST_5.1.3/SRC/pdgssvx_ABglobal.c:716
#4  0x2ba500b36a7d in MatLUFactorNumeric_SuperLU_DIST (F=0x5213270, A=0x2,
---Type <return> to continue, or q <return> to quit---
info=0x1)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/impls/aij/mpi/superlu_dist/superlu_dist.c:419
#5  0x2ba500b45a1a in MatLUFactorNumeric (fact=0x5213270, mat=0x2,
info=0x1)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/mat/interface/matrix.c:2996
#6  0x2ba500e9e6c7 in PCSetUp_LU (pc=0x5213270)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/impls/factor/lu/lu.c:172
#7  0x2ba500ded084 in PCSetUp (pc=0x5213270)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/pc/interface/precon.c:968
#8  0x2ba500f2968d in KSPSetUp (ksp=0x5213270)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:390
#9  0x2ba500f257be in KSPSolve (ksp=0x5213270, b=0x2, x=0x4193510)
at 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.5/src/ksp/ksp/interface/itfunc.c:599
#10 

Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and 3.7.x?

2017-08-28 Thread Klaij, Christiaan
Hi Jed,

Thanks for clarifying, I understand the two-sided bound and the
condition number. So for left preconditioning we would have P_L A
instead of A in this bound and P_L r instead of r. For right
preconditioning it is less obvious to me. How much would that
loosen the bound?

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Improved-modelling-of-sheet-cavitation-dynamics-on-Delft-Twistll-Hydrofoil-1.htm


From: Jed Brown <j...@jedbrown.org>
Sent: Thursday, August 24, 2017 6:01 PM
To: Klaij, Christiaan; Matthew Knepley; Barry Smith
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and 3.7.x?

"Klaij, Christiaan" <c.kl...@marin.nl> writes:

> Matt,
>
> Thanks, I can understand the lower condition number of P A, but
> what about r? Doesn't that change to P r and if so why can we
> assume that ||r|| and ||P r|| have the same order?

Matt's equation was of course wrong in a literal sense, but is based on
the right moral convictions.  We have

  r = A (x - x_exact)

which implies that

  ||r|| <= ||A|| || x - x_exact ||.

We also have

  x - x_exact = A^{-1} r

so

  || x - x_exact || <= || A^{-1} || ||r||.

Combining these gives the two-sided bound

  || A ||^{-1} ||r|| <= || x - x_exact || <= || A^{-1} || ||r||.

The ratio of the high and low bounds is the condition number.  Our
convergence tolerance controls the norm of the residual (in a relative
or absolute sense).  If the condition number is smaller, we get tighter
control on the error.  If you are confident that your preconditioner
reduces the condition number, it makes sense to measure convergence in
the preconditioned norm (this is natural with left preconditioning for
GMRES), in which the bounds above are in terms of the preconditioned
operator.
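
Written out, that is the same two-sided bound with P A in place of A and
the preconditioned residual P r in place of r (a sketch of the analogous
bound, obtained by the same argument as above):

  || P A ||^{-1} || P r || <= || x - x_exact || <= || (P A)^{-1} || || P r ||

so the ratio of the two bounds is now the condition number of P A rather
than that of A.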

The main reason for PETSc to keep the preconditioned norm as a default
is that many users, especially beginners, produce poorly scaled
equations, e.g., with penalty boundary conditions or with disparate
scales between fields with different units (displacement, pressure,
velocity, energy, etc.).  You will often see the unpreconditioned
residual drop by 10 orders of magnitude on the first iteration because
the penalty boundary conditions are satisfied despite the solution being
completely wrong inside the domain.

On the other hand, the preconditioned residual causes misdiagnosis of
convergence if the preconditioner is (nearly) singular, as is the case
with some preconditioners applied to saddle point problems, for example.
But this is easy to identify using -ksp_monitor_true_residual.


>
> Chris
>
>
> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
> www.marin.nl<http://www.marin.nl>
>
> MARIN news: New C-DRONE - for undisturbed wave spectrum 
> measurements<http://www.marin.nl/web/News/News-items/New-CDRONE-for-undisturbed-wave-spectrum-measurements-1.htm>
>
> ________
> From: Matthew Knepley <knep...@gmail.com>
> Sent: Wednesday, August 23, 2017 8:37 AM
> To: Barry Smith
> Cc: Klaij, Christiaan; petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and 3.7.x?
>
> On Wed, Aug 23, 2017 at 2:30 AM, Barry Smith 
> <bsm...@mcs.anl.gov<mailto:bsm...@mcs.anl.gov>> wrote:
>
>Some argue that the preconditioned residual is "closer to" the norm of the 
> error than the unpreconditioned norm. I don't have a solid mathematical 
> reason to prefer left preconditioning with the preconditioned norm.
>
> Because you have || x - x_exact || < k(A) || r ||
>
> where r is the residual and k is the condition number of A. If instead of A 
> you use P A, which we assume has a lower condition number, then
> this bound is improved.
>
>   Thanks,
>
>  Matt
>
>
>Barry
>
>
>
>> On Aug 22, 2017, at 11:27 PM, Klaij, Christiaan 
>> <c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
>>
>> Barry,
>>
>> Thanks for the explanation.
>>
>> We do have some rare cases that give false convergence, but
>> decided to use
>>
>> CALL KSPSetNormType(ksp,KSP_NORM_UNPRECONDITIONED,ierr)
>>
>> so that convergence is always based on the true residual. Our
>> results are much more 

Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and 3.7.x?

2017-08-24 Thread Klaij, Christiaan
Matt,

Thanks, I can understand the lower condition number of P A, but
what about r? Doesn't that change to P r and if so why can we
assume that ||r|| and ||P r|| have the same order?

Chris


dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>

MARIN news: New C-DRONE - for undisturbed wave spectrum 
measurements<http://www.marin.nl/web/News/News-items/New-CDRONE-for-undisturbed-wave-spectrum-measurements-1.htm>


From: Matthew Knepley <knep...@gmail.com>
Sent: Wednesday, August 23, 2017 8:37 AM
To: Barry Smith
Cc: Klaij, Christiaan; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and 3.7.x?

On Wed, Aug 23, 2017 at 2:30 AM, Barry Smith 
<bsm...@mcs.anl.gov<mailto:bsm...@mcs.anl.gov>> wrote:

   Some argue that the preconditioned residual is "closer to" the norm of the 
error than the unpreconditioned norm. I don't have a solid mathematical reason 
to prefer left preconditioning with the preconditioned norm.

Because you have || x - x_exact || < k(A) || r ||

where r is the residual and k is the condition number of A. If instead of A you 
use P A, which we assume has a lower condition number, then
this bound is improved.

  Thanks,

 Matt


   Barry



> On Aug 22, 2017, at 11:27 PM, Klaij, Christiaan 
> <c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
>
> Barry,
>
> Thanks for the explanation.
>
> We do have some rare cases that give false convergence, but
> decided to use
>
> CALL KSPSetNormType(ksp,KSP_NORM_UNPRECONDITIONED,ierr)
>
> so that convergence is always based on the true residual. Our
> results are much more consistent now. So that could have been
> your protection against the rare case as well, right? Why do you
> prefer left preconditioning?
>
> Chris
>
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44<tel:%2B31%20317%2049%2033%2044> | 
> mailto:c.kl...@marin.nl<mailto:c.kl...@marin.nl> | http://www.marin.nl
>
> MARIN news: 
> http://www.marin.nl/web/News/News-items/BlueWeek-October-911-Rostock.htm
>
> 
> From: Barry Smith <bsm...@mcs.anl.gov<mailto:bsm...@mcs.anl.gov>>
> Sent: Tuesday, August 22, 2017 6:25 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov<mailto:petsc-users@mcs.anl.gov>
> Subject: Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and 3.7.x?
>
>> On Aug 22, 2017, at 6:49 AM, Klaij, Christiaan 
>> <c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
>>
>> We also faced this problem in our code. So I've added:
>>
>> CALL 
>> PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sub_pc_factor_shift_type","nonzero",ierr)
>>
>> since there seems to be no setter function for this (correct me
>> if I'm wrong). Then everything's fine again.
>>
>> Out of curiosity, what was the reason to change the default
>> behaviour?
>
>   The reason we changed this is that we would rather have a failure that 
> makes the user aware of a serious problem than to produce "garbage" results. 
> In some rare cases the shift can cause a huge jump in the preconditioned 
> residual which then decreases rapidly while the true residual does not 
> improve. This results in the KSP thinking it has converged while in fact it 
> has essentially garbage for an answer. Under the previous model, where we 
> shifted by default, users would in this rare case think they had reasonable 
> solutions when they did not.
>
>   For many users, such as yourself, the previous default behavior was fine 
> because you didn't have the "rare case" but we decided it was best to protect 
> against the rare case even though it would require other users such as 
> yourself to add the option.
>
>   Barry




--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/




Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and 3.7.x?

2017-08-23 Thread Klaij, Christiaan
Lawrence,

Well, I was looking for something like SubPCFactorSetShiftType,
or is it supposed to somehow "trickle down" to the sub pc's?

Or should I setup the pc, get the sub-pc's and then apply
PCFactorSetShiftType on the sub pc's, something like

CALL KSPSetup(ksp)
CALL PCBJacobiGetSubKSP(ksp, subksp)
CALL KSPGetPC(subksp, subpc, ierr)
CALL PCFactorSetShiftType(subpc,MAT_SHIFT_NONZERO,ierr)
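
Spelled out, such a sketch might look like the following (just a sketch of
the idea, with an arbitrary wrapper name, assuming petsc-3.8 style Fortran
modules - with 3.7 the include line differs but the calls are the same - a
KSP already configured with PCBJACOBI, and the default of one block per
process; note that PCBJacobiGetSubKSP takes the PC, not the KSP, and also
returns the number of local blocks and the index of the first one):

      subroutine set_subpc_shift(ksp, ierr)
#include <petsc/finclude/petscksp.h>
      use petscksp
      implicit none
      KSP            :: ksp
      PetscErrorCode :: ierr
      KSP            :: subksp(1)
      PC             :: pc, subpc
      PetscInt       :: nlocal, first

      ! the sub-KSPs only exist after setup
      call KSPSetUp(ksp, ierr)
      call KSPGetPC(ksp, pc, ierr)
      ! with one block per process, subksp(1) is the local solver
      call PCBJacobiGetSubKSP(pc, nlocal, first, subksp, ierr)
      call KSPGetPC(subksp(1), subpc, ierr)
      call PCFactorSetShiftType(subpc, MAT_SHIFT_NONZERO, ierr)
      end subroutine set_subpc_shift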

Chris

> Message: 2
> Date: Tue, 22 Aug 2017 15:17:58 +0100
> From: Lawrence Mitchell <lawrence.mitch...@imperial.ac.uk>
> To: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and
> 3.7.x?
> Message-ID: <14d4c158-95b1-96ab-90b7-b81d3239b...@imperial.ac.uk>
> Content-Type: text/plain; charset=utf-8
>
>
>
> On 22/08/17 14:49, Klaij, Christiaan wrote:
> > CALL 
> > PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sub_pc_factor_shift_type","nonzero",ierr)
> >
> > since there seems to be no setter function for this (correct me
> > if I'm wrong).
>
> I think you want PCFactorSetShiftType.
>
> Lawrence


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/AMT17-October-1113-Glasgow.htm



Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and 3.7.x?

2017-08-23 Thread Klaij, Christiaan
Barry,

Thanks for the explanation.

We do have some rare cases that give false convergence, but
decided to use

 CALL KSPSetNormType(ksp,KSP_NORM_UNPRECONDITIONED,ierr)

so that convergence is always based on the true residual. Our
results are much more consistent now. So that could have been
your protection against the rare case as well, right? Why do you
prefer left preconditioning?

Chris



dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/BlueWeek-October-911-Rostock.htm


From: Barry Smith <bsm...@mcs.anl.gov>
Sent: Tuesday, August 22, 2017 6:25 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and 3.7.x?

> On Aug 22, 2017, at 6:49 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>
> We also faced this problem in our code. So I've added:
>
> CALL 
> PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sub_pc_factor_shift_type","nonzero",ierr)
>
> since there seems to be no setter function for this (correct me
> if I'm wrong). Then everything's fine again.
>
> Out of curiosity, what was the reason to change the default
> behaviour?

   The reason we changed this is that we would rather have a failure that makes 
the user aware of a serious problem than to produce "garbage" results. In some 
rare cases the shift can cause a huge jump in the preconditioned residual which 
then decreases rapidly while the true residual does not improve. This results 
in the KSP thinking it has converged while in fact it has essentially garbage 
for an answer. Under the previous model, where we shifted by default, users 
would in this rare case think they had reasonable solutions when they did not.

   For many users, such as yourself, the previous default behavior was fine 
because you didn't have the "rare case" but we decided it was best to protect 
against the rare case even though it would require other users such as yourself 
to add the option.

   Barry


Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and 3.7.x?

2017-08-22 Thread Klaij, Christiaan
We also faced this problem in our code. So I've added:

CALL 
PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sub_pc_factor_shift_type","nonzero",ierr)

since there seems to be no setter function for this (correct me
if I'm wrong). Then everything's fine again.

Out of curiosity, what was the reason to change the default
behaviour? It seems to have really reduced the robustness of
PCBJACOBI.

> Message: 2
> Date: Fri, 11 Aug 2017 12:28:43 -0500
> From: Barry Smith 
> To: Gaetan Kenway 
> Cc: petsc-users 
> Subject: Re: [petsc-users] Petsc ILU PC Change between 3.6.4 and
> 3.7.x?
> Message-ID: 
> Content-Type: text/plain; charset="us-ascii"
>
>
>Run with the additional option
>
>  -sub_pc_factor_shift_type nonzero
>
> does this resolve the problem. We changed the default behavior for ILU when 
> it detects "zero" pivots.
>
> Please let us know if this resolves the problem and we'll update the changes 
> file.
>
>Barry


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Improved-modelling-of-sheet-cavitation-dynamics-on-Delft-Twistll-Hydrofoil-1.htm



Re: [petsc-users] Fw: left and right preconditioning with a constant null space

2017-03-28 Thread Klaij, Christiaan
Matt,


Yes, null space vector attached to the large matrix and
consistent rhs. This seems to be what Barry wants (or I
misunderstood his previous email)

a) that was Lawrence's suggestion as well, using
PetscObjectCompose, but that doesn't seem to work in Fortran as I
reported earlier.

b) Good to know, but so far I don't have a DM.

c) same problem as a)

I understand your last point about pulling apart the global null
vector. Then again how would a user know the null space of any
submatrices that arise somewhere within PCFieldSplit?


Chris


dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>

MARIN news: Meet us again at the OTC 
2017<http://www.marin.nl/web/News/News-items/Meet-us-again-at-the-OTC-2017.htm>


From: Matthew Knepley <knep...@gmail.com>
Sent: Tuesday, March 28, 2017 3:27 PM
To: Klaij, Christiaan
Cc: Lawrence Mitchell; petsc-users@mcs.anl.gov
Subject: Re: Fw: [petsc-users] left and right preconditioning with a constant 
null space

On Tue, Mar 28, 2017 at 8:19 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
Barry,

That seems by far the best way to proceed! As a user I'm
responsible for the velocity-pressure matrix and its null space,
all the rest is up to PCFieldSplit. But unfortunately it doesn't
work:

I've constructed the null space [u,p]=[0,1], attached it to the
velocity-pressure matrix and verified it by MatNullSpaceTest. I'm
making sure the rhs is consistent with "MatNullSpaceRemove".

However, the null space doesn't seem to propagate to the Schur
complement, which therefore doesn't converge, see
attachment "out1".

When I attach the constant null space directly to A11, it does
reach the Schur complement and I do get convergence, see
attachment "out2".

So you attach a null space vector to the large matrix, and have a consistent 
rhs?
This is not quite what we want. If you

  a) Had a consistent rhs and attached the constant nullspace vector to the 
pressure IS, then things will work

  b) Had a consistent rhs and attached the constant nullspace vector to the 
"field" object from a DM, it should work

  c) Attached the global nullspace vector to A^T and the constant nullspace to 
the pressure IS, it should work

We can't really pull apart the global null vector because there is no guarantee 
that its the nullspace of the submatrix.

  Thanks,

 Matt

Chris


From: Barry Smith <bsm...@mcs.anl.gov<mailto:bsm...@mcs.anl.gov>>
Sent: Monday, March 27, 2017 6:35 PM
To: Klaij, Christiaan
Cc: Lawrence Mitchell; Matthew Knepley; 
petsc-users@mcs.anl.gov<mailto:petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] left and right preconditioning with a constant null 
space

> On Mar 27, 2017, at 2:23 AM, Klaij, Christiaan 
> <c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
>
> Barry,
>
> I removed the null space from the rhs in the debug program that I
> wrote to just solve Sp x = b once. In this debug program I've
> constructed Sp myself after reading in the four blocks from the
> real program. So this is independent of PCFieldSplit. Indeed I
> also see bad convergence when using pc_type svd for this debug
> program unless I remove the null space from the rhs.
>
> So far I haven't managed to translate any of this to the real
> program.
>
> - Setting the null space for Sp in the real program seems to work
>  by happy accident, but Lawrence gave me the hint to
>  use "PetscObjectCompose" to set the nullspace using is1.
>
> - I still have to understand Lawrence's hint and Matt's comment
>  about MatSetTransposeNullSpace.
>
> - I'm not sure how to remove the null space from the rhs vector
>  in the real program, since I have one rhs vector with both
>  velocity and pressure and the null space only refers to the
>  pressure part. Any hints?
>
> - Or should I set the null space for the velocity-pressure matrix
>  itself, instead of the Schur complement?

   I would first check if the entire full velocity-pressure right hand side is 
consistent. If it is not you can make it consistent by removing the transpose 
null space. You can use MatNullSpaceCreate() to create the null space by 
passing in a vector that is constant on all the pressure variables and 0 on the 
velocity variables.
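
In C this could look roughly as follows (a sketch only; A, b and the pressure index set is_pressure are assumed names, and the Fortran calls are the same apart from the trailing ierr argument):

      /* sketch only: A, b and is_pressure are assumed names */
      Vec          nsp_vec, psub;
      MatNullSpace nullsp;
      ierr = MatCreateVecs(A, &nsp_vec, NULL);CHKERRQ(ierr);
      ierr = VecSet(nsp_vec, 0.0);CHKERRQ(ierr);                          /* zero on the velocity part */
      ierr = VecGetSubVector(nsp_vec, is_pressure, &psub);CHKERRQ(ierr);
      ierr = VecSet(psub, 1.0);CHKERRQ(ierr);                             /* constant on the pressure part */
      ierr = VecRestoreSubVector(nsp_vec, is_pressure, &psub);CHKERRQ(ierr);
      ierr = VecNormalize(nsp_vec, NULL);CHKERRQ(ierr);                   /* null space vectors must have unit norm */
      ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &nsp_vec, &nullsp);CHKERRQ(ierr);
      ierr = MatSetTransposeNullSpace(A, nullsp);CHKERRQ(ierr);
      ierr = MatNullSpaceRemove(nullsp, b);CHKERRQ(ierr);                 /* make the rhs consistent */
      ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
      ierr = VecDestroy(&nsp_vec);CHKERRQ(ierr);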

   Barry

>
> - Besides this, I'm also wondering why the rhs would be
>  inconsistent in the first place, it's hard to understand from
>  the discretization.

[petsc-users] Fw: left and right preconditioning with a constant null space

2017-03-28 Thread Klaij, Christiaan
Barry,

That seems by far the best way to proceed! As a user I'm
responsible for the velocity-pressure matrix and its null space,
all the rest is up to PCFieldSplit. But unfortunately it doesn't
work:

I've constructed the null space [u,p]=[0,1], attached it to the
velocity-pressure matrix and verified it by MatNullSpaceTest. I'm
making sure the rhs is consistent with "MatNullSpaceRemove".

However, the null space doesn't seem to propagate to the Schur
complement, which therefore doesn't converge, see
attachment "out1".

When I attach the constant null space directly to A11, it does
reach the Schur complement and I do get convergence, see
attachment "out2".

Chris


From: Barry Smith <bsm...@mcs.anl.gov>
Sent: Monday, March 27, 2017 6:35 PM
To: Klaij, Christiaan
Cc: Lawrence Mitchell; Matthew Knepley; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] left and right preconditioning with a constant null 
space

> On Mar 27, 2017, at 2:23 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>
> Barry,
>
> I removed the null space from the rhs in the debug program that I
> wrote to just solve Sp x = b once. In this debug program I've
> constructed Sp myself after reading in the four blocks from the
> real program. So this is independent of PCFieldSplit. Indeed I
> also see bad convergence when using pc_type svd for this debug
> program unless I remove the null space from the rhs.
>
> So far I haven't managed to translate any of this to the real
> program.
>
> - Setting the null space for Sp in the real program seems to work
>  by happy accident, but Lawrence gave me the hint to
>  use "PetscObjectCompose" to set the nullspace using is1.
>
> - I still have to understand Lawrence's hint and Matt's comment
>  about MatSetTransposeNullSpace.
>
> - I'm not sure how to remove the null space from the rhs vector
>  in the real program, since I have one rhs vector with both
>  velocity and pressure and the null space only refers to the
>  pressure part. Any hints?
>
> - Or should I set the null space for the velocity-pressure matrix
>  itself, instead of the Schur complement?

   I would first check if the entire full velocity-pressure right hand side is 
consistent. If it is not you can make it consistent by removing the transpose 
null space. You can use MatNullSpaceCreate() to create the null space by 
passing in a vector that is constant on all the pressure variables and 0 on the 
velocity variables.

   Barry

>
> - Besides this, I'm also wondering why the rhs would be
>  inconsistent in the first place, it's hard to understand from
>  the discretization.
>
> Thanks for your reply,
> Chris
>
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>
>
> 
> From: Barry Smith <bsm...@mcs.anl.gov>
> Sent: Saturday, March 25, 2017 1:29 AM
> To: Klaij, Christiaan
> Cc: Lawrence Mitchell; Matthew Knepley; petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] left and right preconditioning with a constant 
> null space
>
>> On Mar 24, 2017, at 10:11 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>>
>> I've written a small PETSc program that loads the four blocks,
>> constructs Sp, attaches the null space and solves with a random
>> rhs vector.
>>
>> This small program replicates the same behaviour as the real
>> code: convergence in the preconditioned norm, stagnation in the
>> unpreconditioned norm.
>>
>> But when I add a call to remove the null space from the rhs
>> vector ("MatNullSpaceRemove"),
>
>   Are you removing the null space from the original full right hand side or 
> inside the solver for the Schur complement problem?
>
>   Note that if instead of using PCFIELDSPLIT you use some other simpler PC 
> you should also see bad convergence, do you? Even if you use -pc_type svd you 
> should see bad convergence?
>
>
>
>> I do get convergence in both
>> norms! Clearly, the real code must somehow produce an
>> inconsistent rhs vector. So the problem is indeed somewhere else
>> and not in PCFieldSplit.
>>
>> Chris
>>
>>
>>
>> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
>> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>>
>>
>> ___

Re: [petsc-users] left and right preconditioning with a constant null space

2017-03-27 Thread Klaij, Christiaan
Barry,

I removed the null space from the rhs in the debug program that I
wrote to just solve Sp x = b once. In this debug program I've
constructed Sp myself after reading in the four blocks from the
real program. So this is independent of PCFieldSplit. Indeed I
also see bad convergence when using pc_type svd for this debug
program unless I remove the null space from the rhs.

So far I haven't managed to translate any of this to the real
program.

- Setting the null space for Sp in the real program seems to work
  by happy accident, but Lawrence gave me the hint to
  use "PetscObjectCompose" to set the nullspace using is1.

- I still have to understand Lawrence's hint and Matt's comment
  about MatSetTransposeNullSpace.

- I'm not sure how to remove the null space from the rhs vector
  in the real program, since I have one rhs vector with both
  velocity and pressure and the null space only refers to the
  pressure part. Any hints?

- Or should I set the null space for the velocity-pressure matrix
  itself, instead of the Schur complement?

- Besides this, I'm also wondering why the rhs would be
  inconsistent in the first place, it's hard to understand from
  the discretization.

Thanks for your reply,
Chris



dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Barry Smith <bsm...@mcs.anl.gov>
Sent: Saturday, March 25, 2017 1:29 AM
To: Klaij, Christiaan
Cc: Lawrence Mitchell; Matthew Knepley; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] left and right preconditioning with a constant null 
space

> On Mar 24, 2017, at 10:11 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>
> I've written a small PETSc program that loads the four blocks,
> constructs Sp, attaches the null space and solves with a random
> rhs vector.
>
> This small program replicates the same behaviour as the real
> code: convergence in the preconditioned norm, stagnation in the
> unpreconditioned norm.
>
> But when I add a call to remove the null space from the rhs
> vector ("MatNullSpaceRemove"),

   Are you removing the null space from the original full right hand side or 
inside the solver for the Schur complement problem?

   Note that if instead of using PCFIELDSPLIT you use some other simpler PC you 
should also see bad convergence, do you? Even if you use -pc_type svd you 
should see bad convergence?



> I do get convergence in both
> norms! Clearly, the real code must somehow produce an
> inconsistent rhs vector. So the problem is indeed somewhere else
> and not in PCFieldSplit.
>
> Chris
>
>
>
> dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
>
>
> 
> From: Klaij, Christiaan
> Sent: Friday, March 24, 2017 1:34 PM
> To: Lawrence Mitchell; Matthew Knepley
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] left and right preconditioning with a constant 
> null space
>
> I've also loaded the four blocks into matlab, computed
>
>  Sp = A11 - A10 inv(diag(A00)) A01
>
> and confirmed that Sp has indeed a constant null space.
>
> Chris
> 
> From: Klaij, Christiaan
> Sent: Friday, March 24, 2017 9:05 AM
> To: Lawrence Mitchell; Matthew Knepley
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] left and right preconditioning with a constant 
> null space
>
> Lawrence,
>
> I think you mean "-fieldsplit_1_mat_null_space_test"? This
> doesn't return any info, should it? Anyway, I've added a "call
> MatNullSpaceTest" to the code which returns "true" for the null
> space of A11.
>
> I also tried to run with "-fieldsplit_1_ksp_constant_null_space"
> so that the null space is only attached to S (and not to
> A11). Unfortunately, the behaviour is still the same: convergence
> in the preconditioned norm only.
>
> Chris
> 
> From: Lawrence Mitchell <lawrence.mitch...@imperial.ac.uk>
> Sent: Thursday, March 23, 2017 4:52 PM
> To: Klaij, Christiaan; Matthew Knepley
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] left and right preconditioning with a constant 
> null space
>
> On 23/03/17 15:37, Klaij, Christiaan wrote:
>> Yes, that's clearer, thanks! I do have is0 and is1 so I can try
>> PetscObjectCompose and let you know.
>>
>> Note though that the viewer reports that both S and A11 have a null space attached...

Re: [petsc-users] left and right preconditioning with a constant null space

2017-03-24 Thread Klaij, Christiaan
I've written a small PETSc program that loads the four blocks,
constructs Sp, attaches the null space and solves with a random
rhs vector.

This small program replicates the same behaviour as the real
code: convergence in the preconditioned norm, stagnation in the
unpreconditioned norm.

But when I add a call to remove the null space from the rhs
vector ("MatNullSpaceRemove"), I do get convergence in both
norms! Clearly, the real code must somehow produce an
inconsistent rhs vector. So the problem is indeed somewhere else
and not in PCFieldSplit.

Chris



dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl


____
From: Klaij, Christiaan
Sent: Friday, March 24, 2017 1:34 PM
To: Lawrence Mitchell; Matthew Knepley
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] left and right preconditioning with a constant null 
space

I've also loaded the four blocks into matlab, computed

  Sp = A11 - A10 inv(diag(A00)) A01

and confirmed that Sp has indeed a constant null space.

Chris
____
From: Klaij, Christiaan
Sent: Friday, March 24, 2017 9:05 AM
To: Lawrence Mitchell; Matthew Knepley
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] left and right preconditioning with a constant null 
space

Lawrence,

I think you mean "-fieldsplit_1_mat_null_space_test"? This
doesn't return any info, should it? Anyway, I've added a "call
MatNullSpaceTest" to the code which returns "true" for the null
space of A11.

I also tried to run with "-fieldsplit_1_ksp_constant_null_space"
so that the null space is only attached to S (and not to
A11). Unfortunately, the behaviour is still the same: convergence
in the preconditioned norm only.

Chris

From: Lawrence Mitchell <lawrence.mitch...@imperial.ac.uk>
Sent: Thursday, March 23, 2017 4:52 PM
To: Klaij, Christiaan; Matthew Knepley
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] left and right preconditioning with a constant null 
space

On 23/03/17 15:37, Klaij, Christiaan wrote:
> Yes, that's clearer, thanks! I do have is0 and is1 so I can try
> PetscObjectCompose and let you know.
>
> Note though that the viewer reports that both S and A11 have a
> null space attached... My matrix is a matnest and I've attached a
> null space to A11, so the latter works as expected. But is the viewer
> wrong for S?

No, I think this is a consequence of using a matnest and attaching a
nullspace to A11.  In that case you sort of "can" set a nullspace on
the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because
you just get a reference.  But if you switched to AIJ then you would
no longer get this.

So it happens that the nullspace you set on A11 /is/ transferred over
to S, but this is luck, rather than design.

So maybe there is something else wrong.  Perhaps you can run with
-fieldsplit_1_ksp_test_null_space to check the nullspace matches
correctly?

Lawrence



Re: [petsc-users] left and right preconditioning with a constant null space

2017-03-24 Thread Klaij, Christiaan
I've also loaded the four blocks into matlab, computed

  Sp = A11 - A10 inv(diag(A00)) A01

and confirmed that Sp has indeed a constant null space.

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Klaij, Christiaan
Sent: Friday, March 24, 2017 9:05 AM
To: Lawrence Mitchell; Matthew Knepley
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] left and right preconditioning with a constant null 
space

Lawrence,

I think you mean "-fieldsplit_1_mat_null_space_test"? This
doesn't return any info, should it? Anyway, I've added a "call
MatNullSpaceTest" to the code which returns "true" for the null
space of A11.

I also tried to run with "-fieldsplit_1_ksp_constant_null_space"
so that the null space is only attached to S (and not to
A11). Unfortunately, the behaviour is still the same: convergence
in the preconditioned norm only.

Chris

From: Lawrence Mitchell <lawrence.mitch...@imperial.ac.uk>
Sent: Thursday, March 23, 2017 4:52 PM
To: Klaij, Christiaan; Matthew Knepley
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] left and right preconditioning with a constant null 
space

On 23/03/17 15:37, Klaij, Christiaan wrote:
> Yes, that's clearer, thanks! I do have is0 and is1 so I can try
> PetscObjectCompose and let you know.
>
> Note though that the viewer reports that both S and A11 have a
> null space attached... My matrix is a matnest and I've attached a
> null space to A11, so the latter works as expected. But is the viewer
> wrong for S?

No, I think this is a consequence of using a matnest and attaching a
nullspace to A11.  In that case you sort of "can" set a nullspace on
the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because
you just get a reference.  But if you switched to AIJ then you would
no longer get this.

So it happens that the nullspace you set on A11 /is/ transferred over
to S, but this is luck, rather than design.

So maybe there is something else wrong.  Perhaps you can run with
-fieldsplit_1_ksp_test_null_space to check the nullspace matches
correctly?

Lawrence



Re: [petsc-users] left and right preconditioning with a constant null space

2017-03-23 Thread Klaij, Christiaan
Lawrence,

Yes, that's clearer, thanks! I do have is0 and is1 so I can try
PetscObjectCompose and let you know.

Note though that the viewer reports that both S and A11 have a
null space attached... My matrix is a matnest and I've attached a
null space to A11, so the latter works as expected. But is the viewer
wrong for S?

Chris



dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Lawrence Mitchell <lawrence.mitch...@imperial.ac.uk>
Sent: Thursday, March 23, 2017 11:57 AM
To: Klaij, Christiaan; Matthew Knepley
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] left and right preconditioning with a constant null 
space

On 23/03/17 08:42, Klaij, Christiaan wrote:
> Matt, Lawrence
>
>
> The same problem happens when using gmres with rtol 1e-6 in the
> schur complement (attachment "left_schur"). I'm not sure what
> this tells us. If I understand Lawrence correctly, the null space
> may be attached to the wrong matrix (A11 instead of Sp)?

I think I misread the code.

Because you can only attach nullspaces to either Amat or Pmat, you
can't control the nullspace for (say) Amat[1,1] or Pmat[1,1] because
MatCreateSubMatrix doesn't know anything about nullspaces.

So the steps inside pcfieldsplit are:

createsubmatrices(Amat) -> A, B, C, D

setup schur matrix S <= D - C A^{-1} B

Transfer nullspaces onto S.

How to transfer the nullspaces?  Well, as mentioned, I can't put
anything on the submatrices (because I have no way of accessing them).
 So instead, I need to hang the nullspace on the IS that defines the S
block:

So if you have:

is0, is1

You do:

PetscObjectCompose((PetscObject)is1, "nullspace", nullspace);

Before going into the preconditioner.
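
Written out a little more fully, that step could be something like this (a sketch; is1 and the PC are assumed to exist already):

      MatNullSpace nullsp;
      ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nullsp);CHKERRQ(ierr);
      ierr = PetscObjectCompose((PetscObject)is1, "nullspace", (PetscObject)nullsp);CHKERRQ(ierr);
      ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);   /* the IS keeps its own reference */
      ierr = PCFieldSplitSetIS(pc, "1", is1);CHKERRQ(ierr);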

If you're doing this through a DM, then DMCreateSubDM controls the
transfer of nullspaces, the default implementation DTRT in the case of
sections.  See DMCreateSubDM_Section_Private.

Clearer?

Lawrence



Re: [petsc-users] left and right preconditioning with a constant null space

2017-03-22 Thread Klaij, Christiaan
Thanks Matt, I will try your suggestion and let you know. In the
meantime this is what I did to set the constant null space:

call 
MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL_OBJECT,nullsp,ierr); 
CHKERRQ(ierr)
call MatSetNullSpace(aa_sub(4),nullsp,ierr); CHKERRQ(ierr)
call MatNullSpaceDestroy(nullsp,ierr); CHKERRQ(ierr)

where aa_sub(4) corresponds to A11. This is called before
begin/end mat assembly.

Chris


dr. ir. Christiaan Klaij | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>



From: Matthew Knepley <knep...@gmail.com>
Sent: Wednesday, March 22, 2017 4:47 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] left and right preconditioning with a constant null 
space

On Wed, Mar 22, 2017 at 2:58 PM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
I'm solving the Navier-Stokes equations using PCFieldSplit type
Schur and Selfp. This particular case has only Neumann conditions
for the pressure field.

With left preconditioning and no nullspace, I see that the KSP
solver for S does not converge (attachment "left_nonullsp") in
either norm.

When I attach the constant null space to A11, it gets passed on
to S and the KSP solver for S does converge in the preconditioned
norm only (attachment "left").

However, right preconditioning uses the unpreconditioned norm and
therefore doesn't converge (attachment "right"), regardless of
whether the nullspace is attached or not. Should I conclude that
right preconditioning cannot be used in combination with a null
space?

No, neither of your solves is working. The left preconditioned version is just 
hiding that fact.

You should start by checking that the exact solve works, namely full Schur
factorization with exact solves for A and S. Since you do not have a matrix for S
(unless you tell it to use "full"), just use a very low (1e-10) tolerance. My guess
is that something is off with your null space specification.
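
In options form such a check could be something like this (a sketch; the exact names depend on the solver prefix in use):

      -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur \
      -pc_fieldsplit_schur_fact_type full \
      -fieldsplit_0_ksp_rtol 1e-10 -fieldsplit_1_ksp_rtol 1e-10 \
      -ksp_monitor_true_residual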

  Thanks,

 Matt

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44<tel:%2B31%20317%2049%2033%2044> | 
mailto:c.kl...@marin.nl<mailto:c.kl...@marin.nl> | http://www.marin.nl





--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener




[petsc-users] left and right preconditioning with a constant null space

2017-03-22 Thread Klaij, Christiaan
I'm solving the Navier-Stokes equations using PCFieldSplit type
Schur and Selfp. This particular case has only Neumann conditions
for the pressure field.

With left preconditioning and no nullspace, I see that the KSP
solver for S does not converge (attachment "left_nonullsp") in
either norm.

When I attach the constant null space to A11, it gets passed on
to S and the KSP solver for S does converge in the preconditioned
norm only (attachment "left").

However, right preconditioning uses the unpreconditioned norm and
therefore doesn't converge (attachment "right"), regardless of
whether the nullspace is attached or not. Should I conclude that
right preconditioning cannot be used in combination with a null
space?

Chris


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl




left_nonullsp
Description: left_nonullsp


left
Description: left


right
Description: right


Re: [petsc-users] CG with right preconditioning supports NONE norm type only

2017-03-08 Thread Klaij, Christiaan
Barry,

I came across the same problem and decided to use KSPSetNormType
instead of KSPSetPCSide.  Do I understand correctly that CG with
KSP_NORM_UNPRECONDITIONED would be as efficient as with
KSP_NORM_PRECONDITIONED? Since PC_RIGHT is not supported, I was
under the impression that the former would basically be the
latter with an additional true residual evaluation for the
convergence monitor, which would be less efficient.
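
Concretely, the two variants in question are (C sketch; ksp is assumed to be the relevant solver):

      ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);
      ierr = KSPSetNormType(ksp, KSP_NORM_UNPRECONDITIONED);CHKERRQ(ierr);
      /* versus KSPSetPCSide(ksp, PC_RIGHT), which for CG only supports the NONE norm type */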

Chris

> On Mar 8, 2017, at 10:47 AM, Kong, Fande  wrote:
>
> Thanks Barry,
>
> We are using "KSPSetPCSide(ksp, pcside)" in the code.  I just tried 
> "-ksp_pc_side right", and petsc did not error out.
>
> I like to understand why CG does not work with right preconditioning? 
> Mathematically, the right preconditioning does not make sense?

   No, mathematically it makes sense to do it on the right. It is just that the 
PETSc code was never written to support it on the right. One reason is that CG 
is interesting that you can run with the true residual or the preconditioned 
residual with left preconditioning, hence less incentive to ever bother writing 
it to support right preconditioning. For completeness we should support right 
as well as symmetric.

  Barry


dr. ir. Christiaan Klaij  | Senior Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl




Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-18 Thread Klaij, Christiaan
Thanks Lawrence, that nicely explains the unexpected behaviour!

I guess in general there ought to be getters for the four
ksp(A00)'s that occur in the full factorization.

Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Lawrence Mitchell <lawrence.mitch...@imperial.ac.uk>
Sent: Wednesday, January 18, 2017 10:59 AM
To: petsc-users@mcs.anl.gov
Cc: bsm...@mcs.anl.gov; Klaij, Christiaan
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

On 18/01/17 08:40, Klaij, Christiaan wrote:
> Barry,
>
> I've managed to replicate the problem with 3.7.4
> snes/examples/tutorials/ex70.c. Basically I've added
> KSPGetTotalIterations to main (file is attached):

PCFieldSplitGetSubKSP returns, in the Schur case:

MatSchurComplementGetKSP(pc->schur, &ksp);

in subksp[0]

and

pc->schur in subksp[1]

In your case, subksp[0] is the (preonly) approximation to A^{-1} *inside*

S = D - C A_inner^{-1} B

And subksp[1] is the approximation to S^{-1}.

Since each application of S to a vector (required in S^{-1}) requires
one application of A^{-1}, because you use 225 iterations in total to
invert S, you also use 225 applications of the KSP on A_inner.
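
(The counts below bear this out: the twelve fieldsplit_1_ solves take 14+14+16+16+17+18+20+21+23+22+22+22 = 225 iterations in total, while the twelve fieldsplit_0_ solves at 5 iterations each would only account for the expected 60.)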

There doesn't appear to be a way to get the KSP used for A^{-1} if
you've asked for different approximations to A^{-1} in the 0,0 block
and inside S.

Cheers,

Lawrence

> $ diff -u ex70.c.bak ex70.c
> --- ex70.c.bak2017-01-18 09:25:46.286174830 +0100
> +++ ex70.c2017-01-18 09:03:40.904483434 +0100
> @@ -669,6 +669,10 @@
>KSPksp;
>PetscErrorCode ierr;
>
> +  KSP*subksp;
> +  PC pc;
> +  PetscInt   numsplit = 1, nusediter_vv, nusediter_pp;
> +
>ierr = PetscInitialize(&argc, &argv, NULL, help);CHKERRQ(ierr);
>s.nx = 4;
>s.ny = 6;
> @@ -690,6 +694,13 @@
>ierr = StokesSetupPC(&s, ksp);CHKERRQ(ierr);
>ierr = KSPSolve(ksp, s.b, s.x);CHKERRQ(ierr);
>
> +  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
> +  ierr = PCFieldSplitGetSubKSP(pc,&numsplit,&subksp); CHKERRQ(ierr);
> +  ierr = KSPGetTotalIterations(subksp[0],&nusediter_vv); CHKERRQ(ierr);
> +  ierr = KSPGetTotalIterations(subksp[1],&nusediter_pp); CHKERRQ(ierr);
> +  ierr = PetscPrintf(PETSC_COMM_WORLD," total u solves = %i\n", 
> nusediter_vv); CHKERRQ(ierr);
> +  ierr = PetscPrintf(PETSC_COMM_WORLD," total p solves = %i\n", 
> nusediter_pp); CHKERRQ(ierr);
> +
>/* don't trust, verify! */
>ierr = StokesCalcResidual(&s);CHKERRQ(ierr);
>ierr = StokesCalcError(&s);CHKERRQ(ierr);
>
> Now run as follows:
>
> $ mpirun -n 2 ./ex70 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type 
> schur -pc_fieldsplit_schur_fact_type lower -fieldsplit_0_ksp_type gmres 
> -fieldsplit_0_pc_type bjacobi -fieldsplit_1_pc_type jacobi 
> -fieldsplit_1_inner_ksp_type preonly -fieldsplit_1_inner_pc_type jacobi 
> -fieldsplit_1_upper_ksp_type preonly -fieldsplit_1_upper_pc_type jacobi 
> -fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 16
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 16
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 17
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 18
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 20
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 21
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 23
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
>   Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>   Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-18 Thread Klaij, Christiaan
Barry,

I've managed to replicate the problem with 3.7.4
snes/examples/tutorials/ex70.c. Basically I've added
KSPGetTotalIterations to main (file is attached):

$ diff -u ex70.c.bak ex70.c
--- ex70.c.bak2017-01-18 09:25:46.286174830 +0100
+++ ex70.c2017-01-18 09:03:40.904483434 +0100
@@ -669,6 +669,10 @@
   KSPksp;
   PetscErrorCode ierr;

+  KSP*subksp;
+  PC pc;
+  PetscInt   numsplit = 1, nusediter_vv, nusediter_pp;
+
   ierr = PetscInitialize(&argc, &argv, NULL, help);CHKERRQ(ierr);
   s.nx = 4;
   s.ny = 6;
@@ -690,6 +694,13 @@
   ierr = StokesSetupPC(&s, ksp);CHKERRQ(ierr);
   ierr = KSPSolve(ksp, s.b, s.x);CHKERRQ(ierr);

+  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
+  ierr = PCFieldSplitGetSubKSP(pc,&numsplit,&subksp); CHKERRQ(ierr);
+  ierr = KSPGetTotalIterations(subksp[0],&nusediter_vv); CHKERRQ(ierr);
+  ierr = KSPGetTotalIterations(subksp[1],&nusediter_pp); CHKERRQ(ierr);
+  ierr = PetscPrintf(PETSC_COMM_WORLD," total u solves = %i\n", nusediter_vv); 
CHKERRQ(ierr);
+  ierr = PetscPrintf(PETSC_COMM_WORLD," total p solves = %i\n", nusediter_pp); 
CHKERRQ(ierr);
+
   /* don't trust, verify! */
   ierr = StokesCalcResidual(&s);CHKERRQ(ierr);
   ierr = StokesCalcError(&s);CHKERRQ(ierr);

Now run as follows:

$ mpirun -n 2 ./ex70 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type 
schur -pc_fieldsplit_schur_fact_type lower -fieldsplit_0_ksp_type gmres 
-fieldsplit_0_pc_type bjacobi -fieldsplit_1_pc_type jacobi 
-fieldsplit_1_inner_ksp_type preonly -fieldsplit_1_inner_pc_type jacobi 
-fieldsplit_1_upper_ksp_type preonly -fieldsplit_1_upper_pc_type jacobi 
-fieldsplit_0_ksp_converged_reason -fieldsplit_1_ksp_converged_reason
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 14
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 16
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 16
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 17
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 18
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 20
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 21
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 23
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
  Linear fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 5
  Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
 total u solves = 225
 total p solves = 225
 residual u = 9.67257e-06
 residual p = 5.42082e-07
 residual [u,p] = 9.68775e-06
 discretization error u = 0.0106464
 discretization error p = 1.85907
 discretization error [u,p] = 1.8591

So here again the total of 225 is correct for p, but for u it
should be 60. Hope this helps you find the problem.

Chris



dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl


________
From: Klaij, Christiaan
Sent: Tuesday, January 17, 2017 8:45 AM
To: Barry Smith
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

Well, that's it, all the rest was hard coded. Here's the relevant part of the 
code:

   CALL PCSetType(pc_system,PCFIELDSPLIT,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetType(pc_system,PC_COMPOSITE_SCHUR,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetIS(pc_system,"0",isgs(1),ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetIS(pc_system,"1",isgs(2),ierr); CHKERRQ(ierr)
  CALL 
PCFieldSplitSetSchurFactType(pc_system,PC_FIELDSPLIT_SCHUR_FACT_FULL,ierr);CHKERRQ(ierr)
  CALL 
PCFieldSplitSetSchurPre(pc_system,PC_FIELDSPLIT_SCHUR_PRE_SELFP,PETSC_NULL_OBJECT,ierr);CHKERRQ(ierr)

  CALL 
KSPSetTolerances(ksp_system,tol,PETSC_DEFAULT_REAL,PETSC_DEFAULT_REAL,maxiter,ierr); CHKERRQ(ierr)

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-16 Thread Klaij, Christiaan
Well, that's it, all the rest was hard coded. Here's the relevant part of the 
code:

   CALL PCSetType(pc_system,PCFIELDSPLIT,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetType(pc_system,PC_COMPOSITE_SCHUR,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetIS(pc_system,"0",isgs(1),ierr); CHKERRQ(ierr)
  CALL PCFieldSplitSetIS(pc_system,"1",isgs(2),ierr); CHKERRQ(ierr)
  CALL 
PCFieldSplitSetSchurFactType(pc_system,PC_FIELDSPLIT_SCHUR_FACT_FULL,ierr);CHKERRQ(ierr)
  CALL 
PCFieldSplitSetSchurPre(pc_system,PC_FIELDSPLIT_SCHUR_PRE_SELFP,PETSC_NULL_OBJECT,ierr);CHKERRQ(ierr)

  CALL 
KSPSetTolerances(ksp_system,tol,PETSC_DEFAULT_REAL,PETSC_DEFAULT_REAL,maxiter,ierr);
 CHKERRQ(ierr)
  CALL 
PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_rtol","0.01",ierr);
 CHKERRQ(ierr)
  CALL 
PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_ksp_rtol","0.01",ierr);
 CHKERRQ(ierr)

  CALL 
PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_pc_side","right",ierr);
 CHKERRQ(ierr)
  CALL 
PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_ksp_pc_side","right",ierr);
 CHKERRQ(ierr)

  CALL 
PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_0_ksp_type","gmres",ierr);
 CHKERRQ(ierr)
  CALL 
PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_upper_ksp_type","preonly",ierr);
 CHKERRQ(ierr)
  CALL 
PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_upper_pc_type","jacobi",ierr);
 CHKERRQ(ierr)

  CALL 
PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_inner_ksp_type","preonly",ierr);
 CHKERRQ(ierr)
  CALL 
PetscOptionsSetValue(PETSC_NULL_OBJECT,"-sys_fieldsplit_1_inner_pc_type","jacobi",ierr);
 CHKERRQ(ierr)



dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Barry Smith <bsm...@mcs.anl.gov>
Sent: Monday, January 16, 2017 9:28 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

   Please send all the command line options you use.


> On Jan 16, 2017, at 1:47 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>
> Barry,
>
> Sure, here's the output with:
>
> -sys_ksp_view -sys_ksp_converged_reason 
> -sys_fieldsplit_0_ksp_converged_reason -sys_fieldsplit_1_ksp_converged_reason
>
> (In my previous email, I rearranged 0 & 1 for easy summing.)
>
> Chris
>
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
>  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
>  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
> Linear sys_ solve converged due to CONVERGED_RTOL iterations 6
> KSP Object:(sys_) 1 MPI processes
>  type: fgmres
>GMRES: restart=30, using Classical (unmodified) Gram-Schmidt 
> Orthogonalization with no iterative refinement
>GMRES: happy breakdown tolerance 1e-30
>  maximum iterations=300, initial guess is zero
>  tolerances:  relative=0.01, absolute=1e-50, divergence=1.
>  right preconditioning
>  using UNPRECONDITIONED norm type for convergence test
> PC Object:(sys_) 1 MPI processes
>  type: fieldsplit
>FieldSplit with Schur preconditioner, factorization FULL
>Preconditioner for the Schur complement formed from Sp, an as

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-15 Thread Klaij, Christiaan
ment
  GMRES: happy breakdown tolerance 1e-30
maximum iterations=1, initial guess is zero
tolerances:  relative=0.01, absolute=1e-50, divergence=1.
right preconditioning
using UNPRECONDITIONED norm type for convergence test
  PC Object:  (sys_fieldsplit_1_)   1 MPI processes
type: ilu
  ILU: out-of-place factorization
  0 levels of fill
  tolerance for zero pivot 2.22045e-14
  matrix ordering: natural
  factor fill ratio given 1., needed 1.
Factored matrix follows:
  Mat Object:   1 MPI processes
type: seqaij
rows=3200, cols=3200
package used to perform factorization: petsc
total: nonzeros=40404, allocated nonzeros=40404
total number of mallocs used during MatSetValues calls =0
  not using I-node routines
linear system matrix followed by preconditioner matrix:
Mat Object:(sys_fieldsplit_1_) 1 MPI processes
  type: schurcomplement
  rows=3200, cols=3200
Schur complement A11 - A10 inv(A00) A01
A11
  Mat Object:  (sys_fieldsplit_1_)   1 MPI 
processes
type: seqaij
rows=3200, cols=3200
total: nonzeros=40404, allocated nonzeros=40404
total number of mallocs used during MatSetValues calls =0
  not using I-node routines
A10
  Mat Object:   1 MPI processes
type: seqaij
rows=3200, cols=9600
total: nonzeros=47280, allocated nonzeros=47280
total number of mallocs used during MatSetValues calls =0
  not using I-node routines
KSP of A00
  KSP Object:  (sys_fieldsplit_1_inner_)   
1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
left preconditioning
using NONE norm type for convergence test
  PC Object:  (sys_fieldsplit_1_inner_)   1 
MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object:(sys_fieldsplit_0_) 
1 MPI processes
  type: seqaij
  rows=9600, cols=9600
  total: nonzeros=47280, allocated nonzeros=47280
  total number of mallocs used during MatSetValues calls =0
not using I-node routines
A01
  Mat Object:   1 MPI processes
type: seqaij
rows=9600, cols=3200
total: nonzeros=47280, allocated nonzeros=47280
total number of mallocs used during MatSetValues calls =0
  not using I-node routines
Mat Object: 1 MPI processes
  type: seqaij
  rows=3200, cols=3200
  total: nonzeros=40404, allocated nonzeros=40404
  total number of mallocs used during MatSetValues calls =0
not using I-node routines
  linear system matrix followed by preconditioner matrix:
  Mat Object:   1 MPI processes
type: nest
rows=12800, cols=12800
  Matrix object:
type=nest, rows=2, cols=2
MatNest structure:
(0,0) : prefix="mom_", type=seqaij, rows=9600, cols=9600
(0,1) : prefix="grad_", type=seqaij, rows=9600, cols=3200
(1,0) : prefix="div_", type=seqaij, rows=3200, cols=9600
(1,1) : prefix="stab_", type=seqaij, rows=3200, cols=3200
  Mat Object:   1 MPI processes
type: nest
rows=12800, cols=12800
  Matrix object:
type=nest, rows=2, cols=2
MatNest structure:
(0,0) : prefix="sys_fieldsplit_0_", type=seqaij, rows=9600, cols=9600
(0,1) : type=seqaij, rows=9600, cols=3200
(1,0) : type=seqaij, rows=3200, cols=9600
(1,1) : prefix="sys_fieldsplit_1_", type=seqaij, rows=3200, cols=3200
 nusediter_vv  37
 nusediter_pp  37



dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Barry Smith <bsm...@mcs.anl.gov>
Sent: Friday, January 13, 2017 7:51 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

   Yes, I would have expected this to work. Could you send the output from 
-ksp_view in this case?


> On Jan 13, 2017, at 

Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2017-01-13 Thread Klaij, Christiaan
Barry,

It's been a while but I'm finally using this function in
3.7.4. Is it supposed to work with fieldsplit? Here's why.

I'm solving a Navier-Stokes system with fieldsplit (pc has one
velocity solve and one pressure solve) and trying to retrieve the
totals like this:

  CALL KSPSolve(ksp_system,rr_system,xx_system,ierr); CHKERRQ(ierr)
  CALL PCFieldSplitGetSubKSP(pc_system,numsplit,subksp,ierr); CHKERRQ(ierr)
  CALL KSPGetTotalIterations(subksp(1),nusediter_vv,ierr); CHKERRQ(ierr)
  CALL KSPGetTotalIterations(subksp(2),nusediter_pp,ierr); CHKERRQ(ierr)
print *, 'nusediter_vv', nusediter_vv
print *, 'nusediter_pp', nusediter_pp

Running the code shows this surprise:

  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 1
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 2
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 7
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8
  Linear sys_fieldsplit_0_ solve converged due to CONVERGED_RTOL iterations 8

  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 22
  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 6
  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 3
  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2
  Linear sys_fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 2

 nusediter_vv  37
 nusediter_pp  37

So the value of nusediter_pp is indeed 37, but for nusediter_vv
it should be 66. Any idea what went wrong?

Chris



dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Barry Smith <bsm...@mcs.anl.gov>
Sent: Saturday, April 11, 2015 12:27 AM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

  Chris,

  I have added KSPGetTotalIterations() to the branch 
barry/add-ksp-total-iterations/master and next. After tests it will go into 
master

  Barry

> On Apr 10, 2015, at 8:07 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>
> Barry,
>
> Sure, I can call PCFieldSplitGetSubKSP() to get the fieldsplit_0
> ksp and then KSPGetIterationNumber, but what does this number
> mean?
>
> It appears to be the number of iterations of the last time that
> the subsystem was solved, right? If so, this corresponds to the
> last iteration of the coupled system, how about all the previous
> iterations?
>
> Chris
> 
> From: Barry Smith <bsm...@mcs.anl.gov>
> Sent: Friday, April 10, 2015 2:48 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1
>
>   Chris,
>
> It appears you should call PCFieldSplitGetSubKSP() and then get the 
> information you want out of the individual KSPs. If this doesn't work please 
> let us know.
>
>   Barry
>
>> On Apr 10, 2015, at 6:48 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>>
>> A question when using PCFieldSplit: for each linear iteration of
>> the system, how many iterations for fielsplit 0 and 1?
>>
>> One way to find out is to run with -ksp_monitor,
>> -fieldsplit_0_ksp_monitor and -fieldsplit_1_ksp_monitor. This
>> gives the complete convergence history.
>>
>> Another way, suggested by Matt, is to use -ksp_monitor,
>> -fieldsplit_0_ksp_converged_reason and
>> -fieldsplit_1_ksp_converged_reason. This gives only the totals
>> for fieldsplit 0 and 1 (but without saying for which one).
>>
>> Both ways require to somehow process the output, which is a bit
>> inconvenient. Could KSPGetResidualHistory perhaps return (some)
>> information on the subsystems' convergence for processing inside
>> the code?
>>
>> Chris
>>
>>
>> dr. ir. Christiaan Klaij
>> CFD Researcher
>> Research & Development
>> E mailto:c.kl...@marin.nl
>> T +31 317 49 33 44
>>
>>
>> MARIN
>> 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
>> T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
>>
>



Re: [petsc-users] problems after glibc upgrade to 2.17-157

2017-01-06 Thread Klaij, Christiaan
Satish,

Our sysadmin is not keen on downgrading glibc. I'll stick with 
"--with-shared-libraries=0" for now and wait for SL7.3 with intel 17. Thanks 
for filing the bugreport at RHEL, very curious to see their response.

Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Satish Balay <ba...@mcs.anl.gov>
Sent: Thursday, January 05, 2017 7:02 PM
To: Matthew Knepley
Cc: Klaij, Christiaan; petsc-users
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

On Thu, 5 Jan 2017, Matthew Knepley wrote:

> On Thu, Jan 5, 2017 at 2:37 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:

> > So problem solved for now, thanks to you and Matt for all your
> > help! On the long run I will go for Intel-17 on SL7.3.
> >
> > What worries me though is that a simple update (which happens all
> > the time according to sysadmin) can have such a dramatic effect.
> >
> I agree. It seems SL has broken the ability to use shared libraries with a
> simple point release.
> It seems the robustness of all this process is a myth.

Well it's more of RHEL - than SL. And it's just Intel .so files [as far
as we know] that's triggering this issue.

RHEL generally doesn't make changes that break old binaries. But any
code change [which bug fixes are] - can introduce changed behavior
with some stuff.. [esp stuff that might use internal - non-api
features]

RHEL7 glibc had a huge number of fixes since 2.17-106.el7_2.8
https://rpmfind.net/linux/RPM/centos/updates/7.3.1611/x86_64/Packages/glibc-2.17-157.el7_3.1.x86_64.html

Interestingly the code crashes at dynamic linking time? [ even before
main() starts] - perhaps something to do with the way libintlc.so.5
uses memmove?

(gdb) where
#0  0x7722865e in ?? ()
#1  0x77de9675 in elf_machine_rela (reloc=0x7592ae38, 
reloc=0x7592ae38, skip_ifunc=, 
reloc_addr_arg=0x75b8e8f0, version=0x0, sym=0x75925f58, 
map=0x77fee570)
at ../sysdeps/x86_64/dl-machine.h:288
#2  elf_dynamic_do_Rela (skip_ifunc=, lazy=, 
nrelative=, relsize=, reladdr=, 
map=0x77fee570) at do-rel.h:170
#3  _dl_relocate_object (scope=, reloc_mode=, 
consider_profiling=, consider_profiling@entry=0) at 
dl-reloc.c:259
#4  0x77de0792 in dl_main (phdr=, phdr@entry=0x400040, 
phnum=, phnum@entry=9, 
user_entry=user_entry@entry=0x7fffceb8, auxv=) at rtld.c:2192
#5  0x77df3e36 in _dl_sysdep_start 
(start_argptr=start_argptr@entry=0x7fffcf70, 
dl_main=dl_main@entry=0x77dde820 ) at ../elf/dl-sysdep.c:244
#6  0x77de1a31 in _dl_start_final (arg=0x7fffcf70) at rtld.c:318
#7  _dl_start (arg=0x7fffcf70) at rtld.c:544
#8  0x77dde1e8 in _start () from /lib64/ld-linux-x86-64.so.2
#9  0x0001 in ?? ()
#10 0x7fffd25c in ?? ()
#11 0x in ?? ()
(gdb)

[balay@localhost benchmarks]$ LD_DEBUG=all ./a.out

  2468: symbol=__xpg_basename;  lookup in file=./a.out [0]
  2468: symbol=__xpg_basename;  lookup in 
file=/soft/com/packages/intel/16/u3/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64/libifcore.so.5
 [0]
  2468: symbol=__xpg_basename;  lookup in file=/lib64/libm.so.6 [0]
  2468: symbol=__xpg_basename;  lookup in file=/lib64/libgcc_s.so.1 [0]
  2468: symbol=__xpg_basename;  lookup in file=/lib64/libc.so.6 [0]
  2468: binding file 
/soft/com/packages/intel/16/u3/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64/libintlc.so.5
 [0] to /lib64/libc.so.6 [0]: normal symbol `__xpg_basename'
  2468: symbol=memmove;  lookup in file=./a.out [0]
  2468: symbol=memmove;  lookup in 
file=/soft/com/packages/intel/16/u3/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64/libifcore.so.5
 [0]
  2468: symbol=memmove;  lookup in file=/lib64/libm.so.6 [0]
  2468: symbol=memmove;  lookup in file=/lib64/libgcc_s.so.1 [0]
  2468: symbol=memmove;  lookup in file=/lib64/libc.so.6 [0]
  2468: binding file 
/soft/com/packages/intel/16/u3/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64/libintlc.so.5
 [0] to /lib64/libc.so.6 [0]: normal symbol `memmove'
Segmentation fault (core dumped)



If intel-16 compilers are critical - one can always downgrade to old
glibc - but then would miss out on all the fixes..

yum downgrade glibc*

Satish


Re: [petsc-users] problems after glibc upgrade to 2.17-157

2017-01-05 Thread Klaij, Christiaan
Satish, Matt

Our sysadmin tells me Scientific Linux is still busy with the
RedHat 7.3 update, so yes, this is a partial update somewhere
between 7.2 and 7.3...

No luck with the quotes on my system, but the option
--with-shared-libraries=0 does work! make test gives:

Running test examples to verify correct installation
Using 
PETSC_DIR=/projects/developers/cklaij/ReFRESCO/Dev/trunk/Libs/install/petsc-3.7.4
 and PETSC_ARCH=
C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI 
process
C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI 
processes
Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI 
process
Completed test examples

I've also tested the standalone program of metis, that works too:

$ ./gpmetis
Missing parameters.
   Usage: gpmetis [options]  
  use 'gpmetis -help' for a summary of the options.

So problem solved for now, thanks to you and Matt for all your
help! On the long run I will go for Intel-17 on SL7.3.

What worries me though is that a simple update (which happens all
the time according to sysadmin) can have such a dramatic effect.

Thanks again,
Chris



dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Satish Balay <ba...@mcs.anl.gov>
Sent: Wednesday, January 04, 2017 7:24 PM
To: petsc-users
Cc: Klaij, Christiaan
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

On Wed, 4 Jan 2017, Satish Balay wrote:

> So I guess your best bet is static libraries..

Or upgrade to intel-17 compilers.

Satish

---

[balay@el7 benchmarks]$ icc --version
icc (ICC) 17.0.1 20161005
Copyright (C) 1985-2016 Intel Corporation.  All rights reserved.

[balay@el7 benchmarks]$ icc sizeof.c -lifcore
[balay@el7 benchmarks]$ ldd a.out
linux-vdso.so.1 =>  (0x7ffed4b02000)
libifcore.so.5 => 
/soft/com/packages/intel/17/u1/compilers_and_libraries_2017.1.132/linux/compiler/lib/intel64/libifcore.so.5
 (0x7f5a6543)
libm.so.6 => /lib64/libm.so.6 (0x7f5a65124000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f5a64f0e000)
libc.so.6 => /lib64/libc.so.6 (0x7f5a64b4c000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f5a64948000)
libimf.so => 
/soft/com/packages/intel/17/u1/compilers_and_libraries_2017.1.132/linux/compiler/lib/intel64/libimf.so
 (0x7f5a6445c000)
libsvml.so => 
/soft/com/packages/intel/17/u1/compilers_and_libraries_2017.1.132/linux/compiler/lib/intel64/libsvml.so
 (0x7f5a6355)
libintlc.so.5 => 
/soft/com/packages/intel/17/u1/compilers_and_libraries_2017.1.132/linux/compiler/lib/intel64/libintlc.so.5
 (0x7f5a632e6000)
/lib64/ld-linux-x86-64.so.2 (0x7f5a65793000)
[balay@el7 benchmarks]$ ./a.out
long double : 16
double  : 8
int : 4
char: 1
short   : 2
long: 8
long long   : 8
int *   : 8
size_t  : 8
[balay@el7 benchmarks]$


Re: [petsc-users] problems after glibc upgrade to 2.17-157

2017-01-04 Thread Klaij, Christiaan
By the way, petsc did compile and install metis and parmetis successfully before 
the make error. However, running the newly compiled gpmetis program gives the 
same segmentation fault! So the original problem was not solved by recompiling, 
unfortunately.


Chris



From: Klaij, Christiaan
Sent: Wednesday, January 04, 2017 3:53 PM
To: Matthew Knepley
Cc: petsc-users; Satish Balay
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157


So how would I do that? Does LIBS= accept spaces in the string? 
Something like this perhaps:


LIBS="-L/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin
 -lifcore"


But I'm starting to believe that my intel install is somehow broken. I'm 
getting these intel compilers from rpm's provided by our cluster vendor. On a 
workstation I can try yum remove and install of the intel packages. Not so easy 
on a production cluster. Is this worth a try? Or will it just copy/paste the 
same broken (?) stuff in the same place?


Chris



From: Matthew Knepley <knep...@gmail.com>
Sent: Wednesday, January 04, 2017 3:13 PM
To: Klaij, Christiaan
Cc: petsc-users; Satish Balay
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

On Wed, Jan 4, 2017 at 7:37 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:

I've tried with:


 
--LIBS=/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin/libifcore.a
 -lstdc++\\

This is likely connected to the problem below, but I would have to see the log.

but that doesn't seem to make a difference.


With the option --with-cxx=0 the configure part does work(!), but then I get


**ERROR*
  Error during compile, check Linux-x86_64-Intel/lib/petsc/conf/make.log
  Send it and Linux-x86_64-Intel/lib/petsc/conf/configure.log to 
petsc-ma...@mcs.anl.gov<mailto:petsc-ma...@mcs.anl.gov>
***

Here is the problem:

 CLINKER 
/projects/developers/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.4/Linux-x86_64-Intel/lib/libpetsc.so.3.7.4
ld: 
/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin/libifcore.a(for_init.o):
 relocation R_X86_64_32 against `.rodata.str1.4' can not be used when making a 
shared object; recompile with -fPIC
/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin/libifcore.a:
 could not read symbols: Bad value

Clearly there is something wrong with the compiler install.

However, can you put a libifcore.so in LIBS instead?

   Matt

See the attached log files.


Chris


dr. ir. Christiaan Klaij | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44<tel:+31%20317%20493%20344> | 
c.kl...@marin.nl<mailto:c.kl...@marin.nl> | www.marin.nl<http://www.marin.nl>



From: Matthew Knepley <knep...@gmail.com<mailto:knep...@gmail.com>>
Sent: Wednesday, January 04, 2017 1:43 PM
To: Klaij, Christiaan
Cc: petsc-users; Satish Balay
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

On Wed, Jan 4, 2017 at 4:32 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
Satish,

I tried your suggestion:

--with-clib-autodetect=0 --with-fortranlib-autodetect=0 
--with-cxxlib-autodetect=0 LIBS=LIBS=/path_to/libifcore.a

I guess I don't really need "LIBS= " twice (?) so I've used this line:

LIBS=/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin/libifcore.a

Unfortunately, this approach also fails (attached log):

Ah, this error is much easier:

Executing: mpif90  -o /tmp/petsc-3GfeyZ/config.compilers/conftest-fPIC -g 
-O3  /tmp/petsc-3GfeyZ/config.compilers/conftest.o 
/tmp/petsc-3GfeyZ/config.compilers/cxxobj.o  
/tmp/petsc-3GfeyZ/config.compilers/confc.o   -ldl 
/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin/libifcore.a
Possible ERROR while running linker: exit code 256
stderr:
/tmp/petsc-3GfeyZ/config.compilers/cxxobj.o:(.gnu.linkonce.d.DW.ref.__gxx_personality_v0+0x0):
 undefined reference to `__gxx_personality_v0'

Intel was lazy writing its C++ compiler, so it uses some of g++. If you want to 
use C++, you will need to add -lstdc++ to your LIBS variable (I think).
Otherwise, please turn it off using --with-cxx=0.

  Thanks,

 Matt

***

Re: [petsc-users] problems after glibc upgrade to 2.17-157

2017-01-04 Thread Klaij, Christiaan
So how would I do that? Does LIBS= accept spaces in the string? 
Something like this perhaps:


LIBS="-L/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin
 -lifcore"


But I'm starting to believe that my intel install is somehow broken. I'm 
getting these intel compilers from rpm's provided by our cluster vendor. On a 
workstation I can try yum remove and install of the intel packages. Not so easy 
on a production cluster. Is this worth a try? Or will it just copy/paste the 
same broken (?) stuff in the same place?


Chris


dr. ir. Christiaan Klaij | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>



From: Matthew Knepley <knep...@gmail.com>
Sent: Wednesday, January 04, 2017 3:13 PM
To: Klaij, Christiaan
Cc: petsc-users; Satish Balay
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

On Wed, Jan 4, 2017 at 7:37 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:

I've tried with:


 
--LIBS=/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin/libifcore.a
 -lstdc++\\

This is likely connected to the problem below, but I would have to see the log.

but that doesn't seem to make a difference.


With the option --with-cxx=0 the configure part does work(!), but then I get


**ERROR*
  Error during compile, check Linux-x86_64-Intel/lib/petsc/conf/make.log
  Send it and Linux-x86_64-Intel/lib/petsc/conf/configure.log to 
petsc-ma...@mcs.anl.gov<mailto:petsc-ma...@mcs.anl.gov>
***

Here is the problem:

 CLINKER 
/projects/developers/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.7.4/Linux-x86_64-Intel/lib/libpetsc.so.3.7.4
ld: 
/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin/libifcore.a(for_init.o):
 relocation R_X86_64_32 against `.rodata.str1.4' can not be used when making a 
shared object; recompile with -fPIC
/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin/libifcore.a:
 could not read symbols: Bad value

Clearly there is something wrong with the compiler install.

However, can you put a libifcore.so in LIBS instead?

   Matt

See the attached log files.


Chris


dr. ir. Christiaan Klaij | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44<tel:+31%20317%20493%20344> | 
c.kl...@marin.nl<mailto:c.kl...@marin.nl> | www.marin.nl<http://www.marin.nl>


________
From: Matthew Knepley <knep...@gmail.com<mailto:knep...@gmail.com>>
Sent: Wednesday, January 04, 2017 1:43 PM
To: Klaij, Christiaan
Cc: petsc-users; Satish Balay
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

On Wed, Jan 4, 2017 at 4:32 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
Satish,

I tried your suggestion:

--with-clib-autodetect=0 --with-fortranlib-autodetect=0 
--with-cxxlib-autodetect=0 LIBS=LIBS=/path_to/libifcore.a

I guess I don't really need "LIBS= " twice (?) so I've used this line:

LIBS=/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin/libifcore.a

Unfortunately, this approach also fails (attached log):

Ah, this error is much easier:

Executing: mpif90  -o /tmp/petsc-3GfeyZ/config.compilers/conftest-fPIC -g 
-O3  /tmp/petsc-3GfeyZ/config.compilers/conftest.o 
/tmp/petsc-3GfeyZ/config.compilers/cxxobj.o  
/tmp/petsc-3GfeyZ/config.compilers/confc.o   -ldl 
/cm/shared/apps/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64_lin/libifcore.a
Possible ERROR while running linker: exit code 256
stderr:
/tmp/petsc-3GfeyZ/config.compilers/cxxobj.o:(.gnu.linkonce.d.DW.ref.__gxx_personality_v0+0x0):
 undefined reference to `__gxx_personality_v0'

Intel was lazy writing its C++ compiler, so it uses some of g++.

Re: [petsc-users] problems after glibc upgrade to 2.17-157

2017-01-04 Thread Klaij, Christiaan
Our sysadmin says that SL7 does not provide the debug version of glibc.


Chris

dr. ir. Christiaan Klaij | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>



From: Matthew Knepley <knep...@gmail.com>
Sent: Wednesday, January 04, 2017 1:40 PM
To: Klaij, Christiaan
Cc: petsc-users; Satish Balay
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

On Wed, Jan 4, 2017 at 3:16 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
Well, a bit clearer perhaps. It seems the relevant ERROR is on
line 31039. So I did this case by hand using the compile and link
lines from the log, then run it in gdb:

$ pwd
/tmp/petsc-Q0URwQ/config.setCompilers
$ ls
confdefs.h  conffix.h  conftest  conftest.F  conftest.o
$ gdb
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-80.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
(gdb) file conftest
Reading symbols from /tmp/petsc-Q0URwQ/config.setCompilers/conftest...done.
(gdb) run
Starting program: /tmp/petsc-Q0URwQ/config.setCompilers/conftest

Program received signal SIGSEGV, Segmentation fault.
0x2e32f65e in ?? ()
Missing separate debuginfos, use: debuginfo-install glibc-2.17-157.el7.x86_64
(gdb) bt
#0  0x2e32f65e in ?? ()
#1  0x2aab7675 in _dl_relocate_object ()
   from /lib64/ld-linux-x86-64.so.2
#2  0x2aaae792 in dl_main () from /lib64/ld-linux-x86-64.so.2
#3  0x2aac1e36 in _dl_sysdep_start () from /lib64/ld-linux-x86-64.so.2
#4  0x2aaafa31 in _dl_start () from /lib64/ld-linux-x86-64.so.2
#5  0x2aaac1e8 in _start () from /lib64/ld-linux-x86-64.so.2
#6  0x0001 in ?? ()
#7  0x7fffd4e2 in ?? ()
#8  0x in ?? ()
(gdb)

Does this make any sense to you?

No. It looks like there is something deeply wrong with the dynamic loader. You 
might  try

  debuginfo-install glibc-2.17-157.el7.x86_64

as it says so that we can see the stack trace. Considering that the error 
happens inside of

  _dl_sysdep_start () from /lib64/ld-linux-x86-64.so.2

I am guessing that it is indeed connected to your upgrade of glibc. Since it 
only happens when
you are not using compiler libraries, I think your compiler has pointers back 
to old things in the
OS. I would recommend either a) using GNU as Satish says, or b) reinstalling 
the whole compiler
suite.

I will look at the new problem when not using compiler libraries.

  Thanks,

Matt



dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44<tel:%2B31%20317%2049%2033%2044> | 
mailto:c.kl...@marin.nl<mailto:c.kl...@marin.nl> | http://www.marin.nl



From: Klaij, Christiaan
Sent: Wednesday, January 04, 2017 9:26 AM
To: Matthew Knepley; petsc-users; Satish Balay
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

So I've applied the patch to my current 3.7.4 source, the new
configure.log is attached. It's slightly larger but not much
clearer to me...

Chris

From: Satish Balay <ba...@mcs.anl.gov<mailto:ba...@mcs.anl.gov>>
Sent: Tuesday, January 03, 2017 5:00 PM
To: Matthew Knepley
Cc: Klaij, Christiaan; petsc-users@mcs.anl.gov<mailto:petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

On Tue, 3 Jan 2017, Matthew Knepley wrote:

> Or get the new tarball when it spins tonight, since Satish has just
> added the fix to maint.

We don't spin 'maint/patch-release' tarballs every night. It's every 1-3
months - [partly depending upon the number of outstanding patches - or
their severity]

-rw-r--r-- 1 petsc pdetools 23194357 Jan  1 10:41 petsc-3.7.5.tar.gz
-rw-r--r-- 1 petsc pdetools 23189526 Oct  2 22:06 petsc-3.7.4.tar.gz
-rw-r--r-- 1 petsc pdetools 23172670 Jul 24 12:22 petsc-3.7.3.tar.gz

Re: [petsc-users] problems after glibc upgrade to 2.17-157

2017-01-04 Thread Klaij, Christiaan
Well, a bit clearer perhaps. It seems the relevant ERROR is on
line 31039. So I did this case by hand using the compile and link
lines from the log, then run it in gdb:

$ pwd
/tmp/petsc-Q0URwQ/config.setCompilers
$ ls
confdefs.h  conffix.h  conftest  conftest.F  conftest.o
$ gdb
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-80.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
(gdb) file conftest
Reading symbols from /tmp/petsc-Q0URwQ/config.setCompilers/conftest...done.
(gdb) run
Starting program: /tmp/petsc-Q0URwQ/config.setCompilers/conftest

Program received signal SIGSEGV, Segmentation fault.
0x2e32f65e in ?? ()
Missing separate debuginfos, use: debuginfo-install glibc-2.17-157.el7.x86_64
(gdb) bt
#0  0x2e32f65e in ?? ()
#1  0x2aab7675 in _dl_relocate_object ()
   from /lib64/ld-linux-x86-64.so.2
#2  0x2aaae792 in dl_main () from /lib64/ld-linux-x86-64.so.2
#3  0x2aac1e36 in _dl_sysdep_start () from /lib64/ld-linux-x86-64.so.2
#4  0x2aaafa31 in _dl_start () from /lib64/ld-linux-x86-64.so.2
#5  0x2aaac1e8 in _start () from /lib64/ld-linux-x86-64.so.2
#6  0x0001 in ?? ()
#7  0x7fffd4e2 in ?? ()
#8  0x in ?? ()
(gdb)

Does this make any sense to you?



dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl


________
From: Klaij, Christiaan
Sent: Wednesday, January 04, 2017 9:26 AM
To: Matthew Knepley; petsc-users; Satish Balay
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

So I've applied the patch to my current 3.7.4 source, the new
configure.log is attached. It's slightly larger but not much
clearer to me...

Chris

From: Satish Balay <ba...@mcs.anl.gov>
Sent: Tuesday, January 03, 2017 5:00 PM
To: Matthew Knepley
Cc: Klaij, Christiaan; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

On Tue, 3 Jan 2017, Matthew Knepley wrote:

> Or get the new tarball when it spins tonight, since Satish has just
> added the fix to maint.

We don't spin 'maint/patch-release' tarballs every night. It's every 1-3
months - [partly depending upon the number of outstanding patches - or
their severity]

-rw-r--r-- 1 petsc pdetools 23194357 Jan  1 10:41 petsc-3.7.5.tar.gz
-rw-r--r-- 1 petsc pdetools 23189526 Oct  2 22:06 petsc-3.7.4.tar.gz
-rw-r--r-- 1 petsc pdetools 23172670 Jul 24 12:22 petsc-3.7.3.tar.gz
-rw-r--r-- 1 petsc pdetools 23111802 Jun  5  2016 petsc-3.7.2.tar.gz
-rw-r--r-- 1 petsc pdetools 23113397 May 15  2016 petsc-3.7.1.tar.gz
-rw-r--r-- 1 petsc pdetools 22083999 Apr 25  2016 petsc-3.7.0.tar.gz

Satish


Re: [petsc-users] problems after glibc upgrade to 2.17-157

2017-01-03 Thread Klaij, Christiaan
I've downloaded the tarball on October 24th:


$ ls -lh petsc-lite-3.7.4.tar.gz
-rw-r--r-- 1 cklaij domain users 8.4M Oct 24 11:07 petsc-lite-3.7.4.tar.gz


(no direct internet access on cluster)

dr. ir. Christiaan Klaij | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>



From: Matthew Knepley <knep...@gmail.com>
Sent: Tuesday, January 03, 2017 4:36 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] problems after glibc upgrade to 2.17-157

On Tue, Jan 3, 2017 at 9:04 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:

I've been using petsc-3.7.4 with intel mpi and compilers,
superlu_dist, metis and parmetis on a cluster running
SL7. Everything was working fine until SL7 got an update where
glibc was upgraded from 2.17-106 to 2.17-157.

I cannot see the error in your log. We previously fixed a bug with this error 
reporting:

  
https://bitbucket.org/petsc/petsc/commits/32cc76960ddbb48660f8e7c667e293c0ccd0e7d7

in August. Is it possible that your PETSc is older than this? Could you apply 
that patch, or
run the configure with 'master'?

My guess is this is a dynamic library path problem, as it always is after 
upgrades.

  Thanks,

Matt

This update seemed to have broken (at least) parmetis: the
standalone binary gpmetis started to give a segmentation
fault. The core dump shows this:

Core was generated by `gpmetis'.
Program terminated with signal 11, Segmentation fault.
#0  0x2c6b865e in memmove () from /lib64/libc.so.6

That's when I decided to recompile, but to my surprise I cannot
even get past the configure stage (log attached)!

***
UNABLE to EXECUTE BINARIES for ./configure
---
Cannot run executables created with FC. If this machine uses a batch system
to submit jobs you will need to configure using ./configure with the additional 
option  --with-batch.
 Otherwise there is problem with the compilers. Can you compile and run code 
with your compiler 'mpif90'?
See http://www.mcs.anl.gov/petsc/documentation/faq.html#libimf
***

Note the following:

1) Configure was done with the exact same options that worked
fine before the update of SL7.

2) The intel mpi and compilers are exactly the same as before the
update of SL7.

3) The cluster does not require a batch system to run code.

4) I can compile and run code with mpif90 on this cluster.

5) The problem also occurs on a workstation running SL7.

Any clues on how to proceed?
Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44<tel:%2B31%20317%2049%2033%2044> | 
mailto:c.kl...@marin.nl<mailto:c.kl...@marin.nl> | http://www.marin.nl





--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener




Re: [petsc-users] --download-metis and build of stand-alone tools

2016-10-31 Thread Klaij, Christiaan
Jed,

Thanks, that line in the cmake file is exactly what I needed to
know. A petsc configure option would be nice to have, but it's
too difficult for me to do right now, I'll just hack the file
instead.

Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Jed Brown <j...@jedbrown.org>
Sent: Monday, October 31, 2016 3:26 PM
To: Klaij, Christiaan; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] --download-metis and build of stand-alone tools

"Klaij, Christiaan" <c.kl...@marin.nl> writes:

> Satish,
>
> I've noticed that SuperLU depends on metis and parmetis and that
> PETSc downloads the versions 5.1.0-p3 and 4.0.3-p3. These are
> different from the Karypis latest stable versions (without the
> -p3). Do I really need these -p3 versions?

They fix some portability and correctness bugs.  Those packages are
mostly unmaintained by upstream and new releases often don't fix bugs
that have reproducible test cases and patches.  So you can use the
upstream version, but it might crash due to known bugs and good luck
getting support.

> If so, after configure, compilation and installation by petsc, it
> seems that the stand-alone programs such as gpmetis are not being
> build and installed. That's a problem for me. I don't mind
> switching to the versions and config that petsc needs, but I do
> need the complete thing. Can I somehow tell petsc to also build
> the standalone tools?

PETSc only needs or wants the library.  In the pkg-metis CMakeLists.txt
file, there is a line

  #add_subdirectory("programs")

which needs to be uncommented to get the programs.  Someone could add a
conditional and plumb it into metis.py to make a PETSc configure option.
That would be a welcome contribution.


[petsc-users] --download-metis and build of stand-alone tools

2016-10-31 Thread Klaij, Christiaan
Satish,

I've noticed that SuperLU depends on metis and parmetis and that
PETSc downloads the versions 5.1.0-p3 and 4.0.3-p3. These are
different from the Karypis latest stable versions (without the
-p3). Do I really need these -p3 versions?

If so, after configure, compilation and installation by petsc, it
seems that the stand-alone programs such as gpmetis are not being
build and installed. That's a problem for me. I don't mind
switching to the versions and config that petsc needs, but I do
need the complete thing. Can I somehow tell petsc to also build
the standalone tools?

Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl




Re: [petsc-users] petsc 3.7.4 with superlu_dist install problem

2016-10-27 Thread Klaij, Christiaan
Satish,

Thanks for the tip, that's very convenient!

Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl



From: Satish Balay <ba...@mcs.anl.gov>
Sent: Wednesday, October 26, 2016 5:32 PM
To: petsc-users
Cc: Klaij, Christiaan
Subject: Re: [petsc-users] petsc 3.7.4 with superlu_dist install problem

One additional note - there is this option that's useful for your use case 
of downloading tarballs separately...

>>>
$ ./configure --with-packages-dir=$HOME/tmp --download-superlu
===
 Configuring PETSc to compile on your system
===
Download the following packages to /home/balay/tmp

superlu ['git://https://github.com/xiaoyeli/superlu', 
'https://github.com/xiaoyeli/superlu/archive/7e10c8a.tar.gz']

Then run the script again
<<<

It tells you exactly the URLs that you should download - for the
packages that you are installing..

Satish


On Wed, 26 Oct 2016, Satish Balay wrote:

> As you can see - the dir names don't match.
>
> petsc-3.7 uses: 
> https://github.com/xiaoyeli/superlu_dist/archive/0b5369f.tar.gz
>
> If you wish to try version 5.1.2 [which is not the default for this version 
> of PETSc] - you can try:
>
> https://github.com/xiaoyeli/superlu_dist/archive/v5.1.2.tar.gz
>
> Alternatively:
>
> cd 
> /projects/developers/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc/3.7.4-dbg/linux_64bit/externalpackages
> mv SuperLU_DIST_5.1.2 superlu_dist_5.1.2
>
> rerun configure [with the same options as before]
>
> Satish
>
> On Wed, 26 Oct 2016, Klaij, Christiaan wrote:
>
> > Satish,
> >
> > I'm having a similar problem with SuperLU_DIST, attached is the
> > configure.log. I've noticed that the extraction of the tarball
> > works fine, yet it gives the "unable to download" message:
> >
> >   Checking for a functional SuperLU_DIST
> >   Looking for SUPERLU_DIST at git.superlu_dist, 
> > hg.superlu_dist or a directory starting with superlu_dist
> >   Could not locate an existing copy of SUPERLU_DIST:
> > ['metis-5.1.0-p3', 'parmetis-4.0.3-p3']
> >   Downloading SuperLU_DIST
> >   
> > ===
> >   Trying to download 
> > file:///projects/developers/cklaij/ReFRESCO/Dev/trunk/Libs/src/superlu_dist_5.1.2.tar.gz
> >  for SUPERLU_DIST
> >   
> > ===
> >
> > Downloading 
> > file:///projects/developers/cklaij/ReFRESCO/Dev/trunk/Libs/src/superlu_dist_5.1.2.tar.gz
> >  to 
> > /projects/developers/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc/3.7.4-dbg/linux_64bit/externalpackages/_d_superlu_dist_5.1.2.tar.gz
> > Extracting 
> > /projects/developers/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc/3.7.4-dbg/linux_64bit/externalpackages/_d_superlu_dist_5.1.2.tar.gz
> > Executing: cd 
> > /projects/developers/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc/3.7.4-dbg/linux_64bit/externalpackages;
> >  chmod -R a+r SuperLU_DIST_5.1.2;find  SuperLU_DIST_5.1.2 -type d -name "*" 
> > -exec chmod a+rx {} \;
> > Looking for SUPERLU_DIST at git.superlu_dist, 
> > hg.superlu_dist or a directory starting with superlu_dist
> > Could not locate an existing copy of SUPERLU_DIST:
> >   ['metis-5.1.0-p3', 'parmetis-4.0.3-p3', 
> > 'SuperLU_DIST_5.1.2']
> >   ERROR: Failed to download SUPERLU_DIST
> >
> > Chris
> >
> >
> > dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
> > MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
> >
> >
> >
>
>



[petsc-users] error with wrong tarball in path/to/package

2016-10-25 Thread Klaij, Christiaan

Here is a small complaint about the error message "unable to
download" that is given when using
--download-PACKAGENAME=/PATH/TO/package.tar.gz with the wrong
tarball. For example, for my previous install, I was using
petsc-3.5.3 with:

--download-ml=/path/to/ml-6.2-win.tar.gz

Using the same file with 3.7.4 gives this error message

***
 UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
details):
---
Unable to download ML
Failed to download ML
***

My guess from ml.py is that I should now download:

https://bitbucket.org/petsc/pkg-ml/get/v6.2-p4.tar.gz

and that you are somehow checking that the file specified in the
path matches this file (name, hash, ...)?

If so, "unable to download" is a bit confusing, I wasted some
time looking at the file system and the file:// protocol, and
annoyed the sysadmins... May I suggest to replace the message
with "wrong version" or something?

Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl




Re: [petsc-users] block matrix without MatCreateNest

2016-09-14 Thread Klaij, Christiaan
Jed,

Just a reminder, you haven't responded to this thread, notably
Matt's question below whether you can fix the L2G mapping.

Chris

> > From: Matthew Knepley <knep...@gmail.com>
> > Sent: Tuesday, August 02, 2016 12:28 AM
> > To: Klaij, Christiaan
> > Cc: petsc-users@mcs.anl.gov; Jed Brown
> > Subject: Re: [petsc-users] block matrix without MatCreateNest
> >
> > On Mon, Aug 1, 2016 at 9:36 AM, Klaij, Christiaan <c.kl...@marin.nl>
> wrote:
> >
> > Matt,
> >
> >
> > 1) great!
> >
> >
> > 2) ??? that's precisely why I paste the output of "cat
mattry.F90"
> in the emails, so you have a small example that produces the errors I
> mention. Now I'm also attaching it to this email.
> >
> > Okay, I have gone through it. You are correct that it is completely
> broken.
> >
> > The way that MatNest currently works is that it tries to use L2G
mappings
> from individual blocks
> > and then builds a composite L2G map for the whole matrix. This is
> obviously incompatible with
> > the primary use case, and should be changed to break up the full L2G
> into one for each block.
> >
> > Jed, can you fix this? I am not sure I know enough about how Nest
works.
> >
> >Matt
> >
> > Thanks,
> >
> > Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl




Re: [petsc-users] block matrix without MatCreateNest

2016-08-04 Thread Klaij, Christiaan
OK, looking forward to the fix!

Related to this, the preallocation would need to depend on the
type that is given at runtime, say

if type=XXX, call MatXXXSetPreallocation()

That would work for, say, seqaij and mpiaij, probably even without
the if-statement, right? And since there's no
MatNestSetPreallocation, should one get the submats and
preallocate those if type=nest?
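
Something like the following is what I have in mind, as a rough sketch
only: it assumes MatNestGetSubMat has a Fortran binding, that the nest
submatrices already exist (i.e. after MatNestSetSubMats), and dnz/onz
are just placeholder per-row estimates, not real numbers.

  ! extra declarations needed: PetscInt :: j,dnz,onz and Mat :: subA
  ! type-dependent preallocation, placed after MatSetFromOptions in mattry.F90
  call MatGetType(A,type,ierr); CHKERRQ(ierr)
  if (type.eq."mpiaij") then
     ! plain AIJ: preallocate the whole matrix at once
     call MatMPIAIJSetPreallocation(A,dnz,PETSC_NULL_INTEGER, &
          onz,PETSC_NULL_INTEGER,ierr); CHKERRQ(ierr)
  else if (type.eq."nest") then
     ! no MatNestSetPreallocation, so preallocate each submatrix instead
     do i=1,3
        do j=1,3
           call MatNestGetSubMat(A,i-1,j-1,subA,ierr); CHKERRQ(ierr)
           call MatMPIAIJSetPreallocation(subA,dnz,PETSC_NULL_INTEGER, &
                onz,PETSC_NULL_INTEGER,ierr); CHKERRQ(ierr)
        end do
     end do
  end if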

Chris

> Date: Tue, 2 Aug 2016 08:49:36 -0500
> From: Matthew Knepley <knep...@gmail.com>
> To: "Klaij, Christiaan" <c.kl...@marin.nl>
> Cc: "petsc-users@mcs.anl.gov" <petsc-users@mcs.anl.gov>
> Subject: Re: [petsc-users] block matrix without MatCreateNest
> Message-ID:
> 

Re: [petsc-users] block matrix without MatCreateNest

2016-08-02 Thread Klaij, Christiaan
Thanks for your help! Going from individual blocks to a whole
matrix makes perfect sense if the blocks are readily available or
needed as fully functional matrices. Don't change that! Maybe add
the opposite?

I'm surprised it's broken though: on this mailing list several
petsc developers have stated on several occasions (and not just
to me) things like "you should never have a matnest", "you should
have a mat then change the type at runtime", "snes ex70 is not
the intended use" and so on.

I fully appreciate the benefit of having a format-independent
assembly and switching mat type from aij to nest depending on the
preconditioner. And given the manual and the statements on this
list, I thought this would be standard practice and therefore
thoroughly tested. But now I get the impression it has never
worked...

Chris


> From: Matthew Knepley <knep...@gmail.com>
> Sent: Tuesday, August 02, 2016 12:28 AM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov; Jed Brown
> Subject: Re: [petsc-users] block matrix without MatCreateNest
>
> On Mon, Aug 1, 2016 at 9:36 AM, Klaij, Christiaan <c.kl...@marin.nl> wrote:
>
> Matt,
>
>
> 1) great!
>
>
> 2) ??? that's precisely why I paste the output of "cat mattry.F90" in the 
> emails, so you have a small example that produces the errors I mention. Now 
> I'm also attaching it to this email.
>
> Okay, I have gone through it. You are correct that it is completely broken.
>
> The way that MatNest currently works is that it tries to use L2G mappings from 
> individual blocks
> and then builds a composite L2G map for the whole matrix. This is obviously 
> incompatible with
> the primary use case, and should be changed to break up the full L2G into one 
> for each block.
>
> Jed, can you fix this? I am not sure I know enough about how Nest works.
>
>Matt
>
> Thanks,
>
> Chris


dr. ir. Christiaan Klaij | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>





Re: [petsc-users] block matrix without MatCreateNest

2016-08-01 Thread Klaij, Christiaan
Matt,


1) great!


2) ??? that's precisely why I paste the output of "cat mattry.F90" in the 
emails, so you have a small example that produces the errors I mention. Now I'm 
also attaching it to this email.


Thanks,

Chris

dr. ir. Christiaan Klaij | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | c.kl...@marin.nl<mailto:c.kl...@marin.nl> | 
www.marin.nl<http://www.marin.nl>



From: Matthew Knepley <knep...@gmail.com>
Sent: Monday, August 01, 2016 4:15 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] block matrix without MatCreateNest

On Mon, Aug 1, 2016 at 8:59 AM, Klaij, Christiaan 
<c.kl...@marin.nl<mailto:c.kl...@marin.nl>> wrote:
Matt,

Thanks for your suggestions. Here's the outcome:

1) without the "if type=nest" protection, I get a "Cannot
locate function MatNestSetSubMats_C" error when using
type mpiaij, see below.

That is a bug. It should be using PetscTryMethod() there, not PetscUseMethod(). 
We will fix it. That way
it can be called for any matrix type, which is the intention.
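
For what it's worth, once that change is in place the user code could presumably
drop the type guard and call it unconditionally, e.g. this untested sketch
(assuming the call becomes a harmless no-op for non-nest types):

  ! hypothetical, once MatNestSetSubMats uses PetscTryMethod:
  ! no-op for aij, does the submatrix setup for nest
  call MatNestSetSubMats(A,3,isg,3,isg,PETSC_NULL_OBJECT,ierr); CHKERRQ(ierr)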

2) with the isg in a proper array, I get the same "Invalid
Pointer to Object" error, see below.

Can you send a small example that we can run? It obviously should work this way.

  Thanks,

 Matt

Chris


$ cat mattry.F90
program mattry

  use petscksp
  implicit none
#include <petsc/finclude/petsckspdef.h>

  PetscInt :: n=4   ! setting 4 cells per process

  PetscErrorCode :: ierr
  PetscInt   :: size,rank,i
  Mat:: A,A02
  MatType:: type
  IS :: isg(3)
  IS :: isl(3)
  ISLocalToGlobalMapping :: map

  integer, allocatable, dimension(:) :: idx

  call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)
  call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr)
  call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr);CHKERRQ(ierr)

  ! local index sets for 3 fields
  allocate(idx(n))
  idx=(/ (i-1, i=1,n) /)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx,PETSC_COPY_VALUES,isl(1),ierr);CHKERRQ(ierr)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx+n,PETSC_COPY_VALUES,isl(2),ierr);CHKERRQ(ierr)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx+2*n,PETSC_COPY_VALUES,isl(3),ierr);CHKERRQ(ierr)
!  call ISView(isl3,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr)
  deallocate(idx)

  ! global index sets for 3 fields
  allocate(idx(n))
  idx=(/ (i-1+rank*3*n, i=1,n) /)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx,PETSC_COPY_VALUES,isg(1),ierr);CHKERRQ(ierr)
  call ISCreateGeneral(PETSC_COMM_WORLD,n,idx+n,PETSC_COPY_VALUES,isg(2),ierr); 
CHKERRQ(ierr)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx+2*n,PETSC_COPY_VALUES,isg(3),ierr); 
CHKERRQ(ierr)
!  call ISView(isg3,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr)
  deallocate(idx)

  ! local-to-global mapping
  allocate(idx(3*n))
  idx=(/ (i-1+rank*3*n, i=1,3*n) /)
  call 
ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD,1,3*n,idx,PETSC_COPY_VALUES,map,ierr);
 CHKERRQ(ierr)
!  call ISLocalToGlobalMappingView(map,PETSC_VIEWER_STDOUT_WORLD,ierr); 
CHKERRQ(ierr)
  deallocate(idx)

  ! create the 3-by-3 block matrix
  call MatCreate(PETSC_COMM_WORLD,A,ierr); CHKERRQ(ierr)
  call MatSetSizes(A,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,ierr); CHKERRQ(ierr)
  call MatSetUp(A,ierr); CHKERRQ(ierr)
  call MatSetOptionsPrefix(A,"A_",ierr); CHKERRQ(ierr)
  call MatSetLocalToGlobalMapping(A,map,map,ierr); CHKERRQ(ierr)
  call MatSetFromOptions(A,ierr); CHKERRQ(ierr)

  ! setup nest
  call MatNestSetSubMats(A,3,isg,3,isg,PETSC_NULL_OBJECT,ierr); CHKERRQ(ierr)

  ! set diagonal of block A02 to 0.65
  call MatGetLocalSubmatrix(A,isl(1),isl(3),A02,ierr); CHKERRQ(ierr)
  do i=1,n
 call MatSetValuesLocal(A02,1,i-1,1,i-1,0.65d0,INSERT_VALUES,ierr); 
CHKERRQ(ierr)
  end do
  call MatRestoreLocalSubMatrix(A,isl(1),isl(3),A02,ierr); CHKERRQ(ierr)
  call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr)
  call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr)

  ! verify
  call MatGetSubmatrix(A,isg(1),isg(3),MAT_INITIAL_MATRIX,A02,ierr); 
CHKERRQ(ierr)
  call MatView(A02,PETSC_VIEWER_STDOUT_WORLD,ierr);CHKERRQ(ierr)

  call PetscFinalize(ierr)

end program mattry


$ mpiexec -n 2 ./mattry -A_mat_type mpiaij
[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Cannot locate function MatNestSetSubMats_C in object

Re: [petsc-users] block matrix without MatCreateNest

2016-08-01 Thread Klaij, Christiaan
Matt,

Thanks for your suggestions. Here's the outcome:

1) without the "if type=nest" protection, I get a "Cannot
locate function MatNestSetSubMats_C" error when using
type mpiaij, see below.

2) with the isg in a proper array, I get the same "Invalid
Pointer to Object" error, see below.

Chris


$ cat mattry.F90
program mattry

  use petscksp
  implicit none
#include <petsc/finclude/petsckspdef.h>

  PetscInt :: n=4   ! setting 4 cells per process

  PetscErrorCode :: ierr
  PetscInt   :: size,rank,i
  Mat:: A,A02
  MatType:: type
  IS :: isg(3)
  IS :: isl(3)
  ISLocalToGlobalMapping :: map

  integer, allocatable, dimension(:) :: idx

  call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)
  call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr)
  call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr);CHKERRQ(ierr)

  ! local index sets for 3 fields
  allocate(idx(n))
  idx=(/ (i-1, i=1,n) /)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx,PETSC_COPY_VALUES,isl(1),ierr);CHKERRQ(ierr)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx+n,PETSC_COPY_VALUES,isl(2),ierr);CHKERRQ(ierr)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx+2*n,PETSC_COPY_VALUES,isl(3),ierr);CHKERRQ(ierr)
!  call ISView(isl3,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr)
  deallocate(idx)

  ! global index sets for 3 fields
  allocate(idx(n))
  idx=(/ (i-1+rank*3*n, i=1,n) /)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx,PETSC_COPY_VALUES,isg(1),ierr);CHKERRQ(ierr)
  call ISCreateGeneral(PETSC_COMM_WORLD,n,idx+n,PETSC_COPY_VALUES,isg(2),ierr); 
CHKERRQ(ierr)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx+2*n,PETSC_COPY_VALUES,isg(3),ierr); 
CHKERRQ(ierr)
!  call ISView(isg3,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr)
  deallocate(idx)

  ! local-to-global mapping
  allocate(idx(3*n))
  idx=(/ (i-1+rank*3*n, i=1,3*n) /)
  call 
ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD,1,3*n,idx,PETSC_COPY_VALUES,map,ierr);
 CHKERRQ(ierr)
!  call ISLocalToGlobalMappingView(map,PETSC_VIEWER_STDOUT_WORLD,ierr); 
CHKERRQ(ierr)
  deallocate(idx)

  ! create the 3-by-3 block matrix
  call MatCreate(PETSC_COMM_WORLD,A,ierr); CHKERRQ(ierr)
  call MatSetSizes(A,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,ierr); CHKERRQ(ierr)
  call MatSetUp(A,ierr); CHKERRQ(ierr)
  call MatSetOptionsPrefix(A,"A_",ierr); CHKERRQ(ierr)
  call MatSetLocalToGlobalMapping(A,map,map,ierr); CHKERRQ(ierr)
  call MatSetFromOptions(A,ierr); CHKERRQ(ierr)

  ! setup nest
  call MatNestSetSubMats(A,3,isg,3,isg,PETSC_NULL_OBJECT,ierr); CHKERRQ(ierr)

  ! set diagonal of block A02 to 0.65
  call MatGetLocalSubmatrix(A,isl(1),isl(3),A02,ierr); CHKERRQ(ierr)
  do i=1,n
 call MatSetValuesLocal(A02,1,i-1,1,i-1,0.65d0,INSERT_VALUES,ierr); 
CHKERRQ(ierr)
  end do
  call MatRestoreLocalSubMatrix(A,isl(1),isl(3),A02,ierr); CHKERRQ(ierr)
  call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr)
  call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr)

  ! verify
  call MatGetSubmatrix(A,isg(1),isg(3),MAT_INITIAL_MATRIX,A02,ierr); 
CHKERRQ(ierr)
  call MatView(A02,PETSC_VIEWER_STDOUT_WORLD,ierr);CHKERRQ(ierr)

  call PetscFinalize(ierr)

end program mattry


$ mpiexec -n 2 ./mattry -A_mat_type mpiaij
[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Cannot locate function MatNestSetSubMats_C in object
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.7.3, Jul, 24, 2016
[0]PETSC ERROR: ./mattry


 on a linux_64bit_debug named 
lin0322.marin.local by cklaij Mon Aug  1 15:42:10 2016
[0]PETSC ERROR: Configure options 
--with-mpi-dir=/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/openmpi/1.8.7 
--with-clanguage=c++ --with-x=1 --with-debugging=1 
--with-blas-lapack-dir=/opt/intel/composer_xe_2015.1.133/mkl 
--with-shared-libraries=0
[0]PETSC ERROR: #1 MatNestSetSubMats() line 1105 in 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc/3.7.3-dbg/src/mat/impls/nest/matnest.c
--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 56.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--
[1]PETSC ERROR: 

[1]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch 
system) has told this process to end

Re: [petsc-users] block matrix without MatCreateNest

2016-08-01 Thread Klaij, Christiaan
Matt, Barry

Thanks for your replies! I've added a call to MatNestSetSubMats()
but something's still wrong, see below.

Chris


$ cat mattry.F90
program mattry

  use petscksp
  implicit none
#include <petsc/finclude/petsckspdef.h>

  PetscInt :: n=4   ! setting 4 cells per process

  PetscErrorCode :: ierr
  PetscInt   :: size,rank,i
  Mat:: A,A02
  MatType:: type
  IS :: isg0,isg1,isg2
  IS :: isl0,isl1,isl2
  ISLocalToGlobalMapping :: map

  integer, allocatable, dimension(:) :: idx

  call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)
  call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr)
  call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr);CHKERRQ(ierr)

  ! local index sets for 3 fields
  allocate(idx(n))
  idx=(/ (i-1, i=1,n) /)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx,PETSC_COPY_VALUES,isl0,ierr);CHKERRQ(ierr)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx+n,PETSC_COPY_VALUES,isl1,ierr);CHKERRQ(ierr)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx+2*n,PETSC_COPY_VALUES,isl2,ierr);CHKERRQ(ierr)
!  call ISView(isl3,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr)
  deallocate(idx)

  ! global index sets for 3 fields
  allocate(idx(n))
  idx=(/ (i-1+rank*3*n, i=1,n) /)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx,PETSC_COPY_VALUES,isg0,ierr);CHKERRQ(ierr)
  call ISCreateGeneral(PETSC_COMM_WORLD,n,idx+n,PETSC_COPY_VALUES,isg1,ierr); 
CHKERRQ(ierr)
  call ISCreateGeneral(PETSC_COMM_WORLD,n,idx+2*n,PETSC_COPY_VALUES,isg2,ierr); 
CHKERRQ(ierr)
!  call ISView(isg3,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr)
  deallocate(idx)

  ! local-to-global mapping
  allocate(idx(3*n))
  idx=(/ (i-1+rank*3*n, i=1,3*n) /)
  call 
ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD,1,3*n,idx,PETSC_COPY_VALUES,map,ierr);
 CHKERRQ(ierr)
!  call ISLocalToGlobalMappingView(map,PETSC_VIEWER_STDOUT_WORLD,ierr); 
CHKERRQ(ierr)
  deallocate(idx)

  ! create the 3-by-3 block matrix
  call MatCreate(PETSC_COMM_WORLD,A,ierr); CHKERRQ(ierr)
  call MatSetSizes(A,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,ierr); CHKERRQ(ierr)
  call MatSetUp(A,ierr); CHKERRQ(ierr)
  call MatSetOptionsPrefix(A,"A_",ierr); CHKERRQ(ierr)
  call MatSetLocalToGlobalMapping(A,map,map,ierr); CHKERRQ(ierr)
  call MatSetFromOptions(A,ierr); CHKERRQ(ierr)

  ! setup nest
  call MatGetType(A,type,ierr); CHKERRQ(ierr)
  if (type.eq."nest") then
 call 
MatNestSetSubMats(A,3,(/isg0,isg1,isg2/),3,(/isg0,isg1,isg2/),PETSC_NULL_OBJECT,ierr);
 CHKERRQ(ierr)
  end if

  ! set diagonal of block A02 to 0.65
  call MatGetLocalSubmatrix(A,isl0,isl2,A02,ierr); CHKERRQ(ierr)
  do i=1,n
 call MatSetValuesLocal(A02,1,i-1,1,i-1,0.65d0,INSERT_VALUES,ierr); 
CHKERRQ(ierr)
  end do
  call MatRestoreLocalSubMatrix(A,isl0,isl2,A02,ierr); CHKERRQ(ierr)
  call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr)
  call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr)

  ! verify
  call MatGetSubmatrix(A,isg0,isg2,MAT_INITIAL_MATRIX,A02,ierr); CHKERRQ(ierr)
  call MatView(A02,PETSC_VIEWER_STDOUT_WORLD,ierr);CHKERRQ(ierr)

  call PetscFinalize(ierr)

end program mattry


$ mpiexec -n 2 ./mattry -A_mat_type nest
[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Corrupt argument: 
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: Invalid Pointer to Object: Parameter # 1
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.7.3, Jul, 24, 2016
[0]PETSC ERROR: ./mattry


 on a linux_64bit_debug named 
lin0322.marin.local by cklaij Mon Aug  1 09:54:07 2016
[0]PETSC ERROR: Configure options 
--with-mpi-dir=/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/openmpi/1.8.7 
--with-clanguage=c++ --with-x=1 --with-debugging=1 
--with-blas-lapack-dir=/opt/intel/composer_xe_2015.1.133/mkl 
--with-shared-libraries=0
[0]PETSC ERROR: #1 PetscObjectReference() line 534 in 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc/3.7.3-dbg/src/sys/objects/inherit.c
[0]PETSC ERROR: #2 MatNestSetSubMats_Nest() line 1042 in 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc/3.7.3-dbg/src/mat/impls/nest/matnest.c
[0]PETSC ERROR: #3 MatNestSetSubMats() line 1105 in 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc/3.7.3-dbg/src/mat/impls/nest/matnest.c
[1]PETSC ERROR: - Error Message 
--
[1]PETSC ERROR: Corrupt argument: 
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[1]PETSC ERROR: Invalid Pointer to Object: Parameter # 1
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 

Re: [petsc-users] block matrix without MatCreateNest

2016-07-30 Thread Klaij, Christiaan
Anyone?
(my guess is an if-statement, something like "if type nest then
setup nest"...)

> Date: Thu, 28 Jul 2016 08:38:54 +0000
> From: "Klaij, Christiaan" <c.kl...@marin.nl>
> To: "petsc-users@mcs.anl.gov" <petsc-users@mcs.anl.gov>
> Subject: [petsc-users] block matrix without MatCreateNest
> Message-ID: <1469695134232.97...@marin.nl>
> Content-Type: text/plain; charset="utf-8"
>
> I'm trying to understand how to assemble a block matrix in a
> format-independent manner, so that I can switch between types
> mpiaij and matnest.
>
> The manual states that the key to format-independent assembly is
> to use MatGetLocalSubMatrix. So, in the code below, I'm using
> this to assemble a 3-by-3 block matrix A and setting the diagonal
> of block A02. This seems to work for type mpiaij, but not for
> type matnest. What am I missing?
>
> Chris
>
>
> $ cat mattry.F90
> program mattry
>
>   use petscksp
>   implicit none
> #include <petsc/finclude/petsckspdef.h>
>
>   PetscInt :: n=4   ! setting 4 cells per process
>
>   PetscErrorCode :: ierr
>   PetscInt   :: size,rank,i
>   Mat:: A,A02
>   IS :: isg0,isg1,isg2
>   IS :: isl0,isl1,isl2
>   ISLocalToGlobalMapping :: map
>
>   integer, allocatable, dimension(:) :: idx
>
>   call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)
>   call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr)
>   call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr);CHKERRQ(ierr)
>
>   ! local index sets for 3 fields
>   allocate(idx(n))
>   idx=(/ (i-1, i=1,n) /)
>   call 
> ISCreateGeneral(PETSC_COMM_WORLD,n,idx,PETSC_COPY_VALUES,isl0,ierr);CHKERRQ(ierr)
>   call 
> ISCreateGeneral(PETSC_COMM_WORLD,n,idx+n,PETSC_COPY_VALUES,isl1,ierr);CHKERRQ(ierr)
>   call 
> ISCreateGeneral(PETSC_COMM_WORLD,n,idx+2*n,PETSC_COPY_VALUES,isl2,ierr);CHKERRQ(ierr)
> !  call ISView(isl3,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr)
>   deallocate(idx)
>
>   ! global index sets for 3 fields
>   allocate(idx(n))
>   idx=(/ (i-1+rank*3*n, i=1,n) /)
>   call 
> ISCreateGeneral(PETSC_COMM_WORLD,n,idx,PETSC_COPY_VALUES,isg0,ierr);CHKERRQ(ierr)
>   call ISCreateGeneral(PETSC_COMM_WORLD,n,idx+n,PETSC_COPY_VALUES,isg1,ierr); 
> CHKERRQ(ierr)
>   call 
> ISCreateGeneral(PETSC_COMM_WORLD,n,idx+2*n,PETSC_COPY_VALUES,isg2,ierr); 
> CHKERRQ(ierr)
> !  call ISView(isg3,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr)
>   deallocate(idx)
>
>   ! local-to-global mapping
>   allocate(idx(3*n))
>   idx=(/ (i-1+rank*3*n, i=1,3*n) /)
>   call 
> ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD,1,3*n,idx,PETSC_COPY_VALUES,map,ierr);
>  CHKERRQ(ierr)
> !  call ISLocalToGlobalMappingView(map,PETSC_VIEWER_STDOUT_WORLD,ierr); 
> CHKERRQ(ierr)
>   deallocate(idx)
>
>   ! create the 3-by-3 block matrix
>   call MatCreate(PETSC_COMM_WORLD,A,ierr); CHKERRQ(ierr)
>   call MatSetSizes(A,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,ierr); CHKERRQ(ierr)
> !  call MatSetType(A,MATNEST,ierr); CHKERRQ(ierr)
>   call MatSetUp(A,ierr); CHKERRQ(ierr)
>   call MatSetOptionsPrefix(A,"A_",ierr); CHKERRQ(ierr)
>   call MatSetLocalToGlobalMapping(A,map,map,ierr); CHKERRQ(ierr)
>   call MatSetFromOptions(A,ierr); CHKERRQ(ierr)
>
>   ! set diagonal of block A02 to 0.65
>   call MatGetLocalSubmatrix(A,isl0,isl2,A02,ierr); CHKERRQ(ierr)
>   do i=1,n
>  call MatSetValuesLocal(A02,1,i-1,1,i-1,0.65d0,INSERT_VALUES,ierr); 
> CHKERRQ(ierr)
>   end do
>   call MatRestoreLocalSubMatrix(A,isl0,isl2,A02,ierr); CHKERRQ(ierr)
>   call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr)
>   call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr)
>
>   ! verify
>   call MatGetSubmatrix(A,isg0,isg2,MAT_INITIAL_MATRIX,A02,ierr); CHKERRQ(ierr)
>   call MatView(A02,PETSC_VIEWER_STDOUT_WORLD,ierr);CHKERRQ(ierr)
>
>   call PetscFinalize(ierr)
>
> end program mattry
>
> $ mpiexec -n 2 ./mattry -A_mat_type mpiaij
> Mat Object: 2 MPI processes
>   type: mpiaij
> row 0: (0, 0.65)
> row 1: (1, 0.65)
> row 2: (2, 0.65)
> row 3: (3, 0.65)
> row 4: (4, 0.65)
> row 5: (5, 0.65)
> row 6: (6, 0.65)
> row 7: (7, 0.65)
>
> $ mpiexec -n 2 ./mattry -A_mat_type nest
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [0]PETSC ERROR: Null Pointer: Parameter # 3
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.7.3, Jul, 24, 2016
> [0]PETSC ERROR: ./mattry  

[petsc-users] block matrix without MatCreateNest

2016-07-28 Thread Klaij, Christiaan
I'm trying to understand how to assemble a block matrix in a
format-independent manner, so that I can switch between types
mpiaij and matnest.

The manual states that the key to format-independent assembly is
to use MatGetLocalSubMatrix. So, in the code below, I'm using
this to assemble a 3-by-3 block matrix A and setting the diagonal
of block A02. This seems to work for type mpiaij, but not for
type matnest. What am I missing?

Chris


$ cat mattry.F90
program mattry

  use petscksp
  implicit none
#include <petsc/finclude/petsckspdef.h>

  PetscInt :: n=4   ! setting 4 cells per process

  PetscErrorCode :: ierr
  PetscInt   :: size,rank,i
  Mat:: A,A02
  IS :: isg0,isg1,isg2
  IS :: isl0,isl1,isl2
  ISLocalToGlobalMapping :: map

  integer, allocatable, dimension(:) :: idx

  call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)
  call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr)
  call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr);CHKERRQ(ierr)

  ! local index sets for 3 fields
  allocate(idx(n))
  idx=(/ (i-1, i=1,n) /)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx,PETSC_COPY_VALUES,isl0,ierr);CHKERRQ(ierr)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx+n,PETSC_COPY_VALUES,isl1,ierr);CHKERRQ(ierr)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx+2*n,PETSC_COPY_VALUES,isl2,ierr);CHKERRQ(ierr)
!  call ISView(isl3,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr)
  deallocate(idx)

  ! global index sets for 3 fields
  allocate(idx(n))
  idx=(/ (i-1+rank*3*n, i=1,n) /)
  call 
ISCreateGeneral(PETSC_COMM_WORLD,n,idx,PETSC_COPY_VALUES,isg0,ierr);CHKERRQ(ierr)
  call ISCreateGeneral(PETSC_COMM_WORLD,n,idx+n,PETSC_COPY_VALUES,isg1,ierr); 
CHKERRQ(ierr)
  call ISCreateGeneral(PETSC_COMM_WORLD,n,idx+2*n,PETSC_COPY_VALUES,isg2,ierr); 
CHKERRQ(ierr)
!  call ISView(isg3,PETSC_VIEWER_STDOUT_WORLD,ierr); CHKERRQ(ierr)
  deallocate(idx)

  ! local-to-global mapping
  allocate(idx(3*n))
  idx=(/ (i-1+rank*3*n, i=1,3*n) /)
  call 
ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD,1,3*n,idx,PETSC_COPY_VALUES,map,ierr);
 CHKERRQ(ierr)
!  call ISLocalToGlobalMappingView(map,PETSC_VIEWER_STDOUT_WORLD,ierr); 
CHKERRQ(ierr)
  deallocate(idx)

  ! create the 3-by-3 block matrix
  call MatCreate(PETSC_COMM_WORLD,A,ierr); CHKERRQ(ierr)
  call MatSetSizes(A,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,ierr); CHKERRQ(ierr)
!  call MatSetType(A,MATNEST,ierr); CHKERRQ(ierr)
  call MatSetUp(A,ierr); CHKERRQ(ierr)
  call MatSetOptionsPrefix(A,"A_",ierr); CHKERRQ(ierr)
  call MatSetLocalToGlobalMapping(A,map,map,ierr); CHKERRQ(ierr)
  call MatSetFromOptions(A,ierr); CHKERRQ(ierr)

  ! set diagonal of block A02 to 0.65
  call MatGetLocalSubmatrix(A,isl0,isl2,A02,ierr); CHKERRQ(ierr)
  do i=1,n
 call MatSetValuesLocal(A02,1,i-1,1,i-1,0.65d0,INSERT_VALUES,ierr); 
CHKERRQ(ierr)
  end do
  call MatRestoreLocalSubMatrix(A,isl0,isl2,A02,ierr); CHKERRQ(ierr)
  call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr)
  call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr)

  ! verify
  call MatGetSubmatrix(A,isg0,isg2,MAT_INITIAL_MATRIX,A02,ierr); CHKERRQ(ierr)
  call MatView(A02,PETSC_VIEWER_STDOUT_WORLD,ierr);CHKERRQ(ierr)

  call PetscFinalize(ierr)

end program mattry

$ mpiexec -n 2 ./mattry -A_mat_type mpiaij
Mat Object: 2 MPI processes
  type: mpiaij
row 0: (0, 0.65)
row 1: (1, 0.65)
row 2: (2, 0.65)
row 3: (3, 0.65)
row 4: (4, 0.65)
row 5: (5, 0.65)
row 6: (6, 0.65)
row 7: (7, 0.65)

$ mpiexec -n 2 ./mattry -A_mat_type nest
[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Null argument, when expecting valid pointer
[0]PETSC ERROR: Null Pointer: Parameter # 3
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.7.3, Jul, 24, 2016
[0]PETSC ERROR: ./mattry


 on a linux_64bit_debug named 
lin0322.marin.local by cklaij Thu Jul 28 10:31:04 2016
[0]PETSC ERROR: Configure options 
--with-mpi-dir=/home/cklaij/ReFRESCO/Dev/trunk/Libs/install/openmpi/1.8.7 
--with-clanguage=c++ --with-x=1 --with-debugging=1 
--with-blas-lapack-dir=/opt/intel/composer_xe_2015.1.133/mkl 
--with-shared-libraries=0
[0]PETSC ERROR: #1 MatNestFindIS() line 298 in 
/home/cklaij/ReFRESCO/Dev/trunk/Libs/build/petsc/3.7.3-dbg/src/mat/impls/nest/matnest.c
[0]PETSC ERROR: [1]PETSC ERROR: - Error Message 
--
[1]PETSC ERROR: Null argument, when expecting valid pointer
[1]PETSC ERROR: Null Pointer: Parameter # 3
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[1]PETSC ERROR: 

Re: [petsc-users] PETSC_NULL_OBJECT gets corrupt after call to MatNestGetISs in fortran

2015-05-01 Thread Klaij, Christiaan
Satish,

Today, I tried again and now I also get:

 0
 0

Sorry, I must have done something wrong earlier... Thanks again,

Chris


From: Satish Balay ba...@mcs.anl.gov
Sent: Thursday, April 30, 2015 9:54 PM
To: Klaij, Christiaan
Cc: petsc-users
Subject: Re: [petsc-users] PETSC_NULL_OBJECT gets corrupt after call to 
MatNestGetISs in fortran

Hm - it works for me with the test code you sent..

petsc-3.5.3 before patch

balay@asterix /home/balay/download-pine/x
$ ./ex1f
0
 40357424

petsc-3.5.3 after patch

balay@asterix /home/balay/download-pine/x
$ ./ex1f
0
0
balay@asterix /home/balay/download-pine/x

Satish


$ On Thu, 30 Apr 2015, Klaij, Christiaan wrote:

 Satish,

 Thanks for the files! I've copied them to the correct location in my 
 petsc-3.5.3 dir and rebuilt the whole thing from scratch but I still get the 
 same memory corruption for the example below...

 Chris
 
 From: Satish Balay ba...@mcs.anl.gov
 Sent: Friday, April 24, 2015 10:54 PM
 To: Klaij, Christiaan
 Cc: petsc-users@mcs.anl.gov
 Subject: Re: [petsc-users] PETSC_NULL_OBJECT gets corrupt after call to 
 MatNestGetISs in fortran

 Sorry for dropping the ball on this issue. I pushed the following fix to 
 maint branch.

 https://bitbucket.org/petsc/petsc/commits/3a4d7b9a6c83003720b45dc0635fc32ea52a4309

 To use this change with petsc-3.5.3 - you can drop in the attached 
 replacement files at:

 src/mat/impls/nest/ftn-custom/zmatnestf.c
 src/mat/impls/nest/ftn-auto/matnestf.c

 Satish

 On Fri, 24 Apr 2015, Klaij, Christiaan wrote:

  Barry, Satish
 
  Any news on this issue?
 
  Chris
 
   On Feb 12, 2015, at 07:13:08 CST, Smith, Barry bsm...@mcs.anl.gov wrote:
  
  Thanks for reporting this. Currently the Fortran stub for this 
   function is generated automatically which means it does not have the 
   logic for handling a PETSC_NULL_OBJECT argument.
  
   Satish, could you please see if you can add a custom fortran stub for 
   this function in maint?
  
 Thanks
  
  Barry
  
On Feb 12, 2015, at 3:02 AM, Klaij, Christiaan C.Klaij at marin.nl 
wrote:
   
Using petsc-3.5.3, I noticed that PETSC_NULL_OBJECT gets corrupt after 
calling MatNestGetISs in fortran. Here's a small example:
   
$ cat fieldsplittry2.F90
program fieldsplittry2
   
 use petscksp
 implicit none
#include <finclude/petsckspdef.h>
   
 PetscErrorCode :: ierr
 PetscInt   :: size,i,j,start,end,n=4,numsplit=1
 PetscScalar:: zero=0.0,one=1.0
 Vec:: diag3,x,b
 Mat:: A,subA(4),myS
 PC :: pc,subpc(2)
 KSP:: ksp,subksp(2)
 IS :: isg(2)
   
 call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)
 call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr);
   
 ! vectors
 call VecCreateMPI(MPI_COMM_WORLD,3*n,PETSC_DECIDE,diag3,ierr); 
CHKERRQ(ierr)
 call VecSet(diag3,one,ierr); CHKERRQ(ierr)
   
 call VecCreateMPI(MPI_COMM_WORLD,4*n,PETSC_DECIDE,x,ierr); 
CHKERRQ(ierr)
 call VecSet(x,zero,ierr); CHKERRQ(ierr)
   
 call VecDuplicate(x,b,ierr); CHKERRQ(ierr)
 call VecSet(b,one,ierr); CHKERRQ(ierr)
   
 ! matrix a00
 call 
MatCreateAIJ(MPI_COMM_WORLD,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(1),ierr);CHKERRQ(ierr)
 call MatDiagonalSet(subA(1),diag3,INSERT_VALUES,ierr);CHKERRQ(ierr)
 call MatAssemblyBegin(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
 call MatAssemblyEnd(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
   
 ! matrix a01
 call 
MatCreateAIJ(MPI_COMM_WORLD,3*n,n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,1,PETSC_NULL_INTEGER,subA(2),ierr);CHKERRQ(ierr)
 call MatGetOwnershipRange(subA(2),start,end,ierr);CHKERRQ(ierr);
 do i=start,end-1
j=mod(i,size*n)
call MatSetValue(subA(2),i,j,one,INSERT_VALUES,ierr);CHKERRQ(ierr)
 end do
 call MatAssemblyBegin(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
 call MatAssemblyEnd(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
   
 ! matrix a10
 call  
MatTranspose(subA(2),MAT_INITIAL_MATRIX,subA(3),ierr);CHKERRQ(ierr)
   
 ! matrix a11 (empty)
 call 
MatCreateAIJ(MPI_COMM_WORLD,n,n,PETSC_DECIDE,PETSC_DECIDE,0,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(4),ierr);CHKERRQ(ierr)
 call MatAssemblyBegin(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
 call MatAssemblyEnd(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
   
 ! nested mat [a00,a01;a10,a11]
 call 
MatCreateNest(MPI_COMM_WORLD,2,PETSC_NULL_OBJECT,2,PETSC_NULL_OBJECT,subA,A,ierr);CHKERRQ(ierr)
 call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
 call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
print

Re: [petsc-users] PETSC_NULL_OBJECT gets corrupt after call to MatNestGetISs in fortran

2015-04-30 Thread Klaij, Christiaan
Satish,

Thanks for the files! I've copied them to the correct location in my 
petsc-3.5.3 dir and rebuild the whole thing from scratch but I still get the 
same memory corruption for the example below...

Chris

From: Satish Balay ba...@mcs.anl.gov
Sent: Friday, April 24, 2015 10:54 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PETSC_NULL_OBJECT gets corrupt after call to 
MatNestGetISs in fortran

Sorry for dropping the ball on this issue. I pushed the following fix to maint 
branch.

https://bitbucket.org/petsc/petsc/commits/3a4d7b9a6c83003720b45dc0635fc32ea52a4309

To use this change with petsc-3.5.3 - you can drop in the attached replacement 
files at:

src/mat/impls/nest/ftn-custom/zmatnestf.c
src/mat/impls/nest/ftn-auto/matnestf.c

Satish

On Fri, 24 Apr 2015, Klaij, Christiaan wrote:

 Barry, Satish

 Any news on this issue?

 Chris

  On Feb 12, 2015, at 07:13:08 CST, Smith, Barry bsm...@mcs.anl.gov wrote:
 
 Thanks for reporting this. Currently the Fortran stub for this function 
  is generated automatically which means it does not have the logic for 
  handling a PETSC_NULL_OBJECT argument.
 
  Satish, could you please see if you can add a custom fortran stub for 
  this function in maint?
 
Thanks
 
 Barry
 
   On Feb 12, 2015, at 3:02 AM, Klaij, Christiaan C.Klaij at marin.nl 
   wrote:
  
   Using petsc-3.5.3, I noticed that PETSC_NULL_OBJECT gets corrupt after 
   calling MatNestGetISs in fortran. Here's a small example:
  
   $ cat fieldsplittry2.F90
   program fieldsplittry2
  
use petscksp
implicit none
   #include finclude/petsckspdef.h
  
PetscErrorCode :: ierr
PetscInt   :: size,i,j,start,end,n=4,numsplit=1
PetscScalar:: zero=0.0,one=1.0
Vec:: diag3,x,b
Mat:: A,subA(4),myS
PC :: pc,subpc(2)
KSP:: ksp,subksp(2)
IS :: isg(2)
  
call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)
call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr);
  
! vectors
call VecCreateMPI(MPI_COMM_WORLD,3*n,PETSC_DECIDE,diag3,ierr); 
   CHKERRQ(ierr)
call VecSet(diag3,one,ierr); CHKERRQ(ierr)
  
call VecCreateMPI(MPI_COMM_WORLD,4*n,PETSC_DECIDE,x,ierr); CHKERRQ(ierr)
call VecSet(x,zero,ierr); CHKERRQ(ierr)
  
call VecDuplicate(x,b,ierr); CHKERRQ(ierr)
call VecSet(b,one,ierr); CHKERRQ(ierr)
  
! matrix a00
call 
   MatCreateAIJ(MPI_COMM_WORLD,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(1),ierr);CHKERRQ(ierr)
call MatDiagonalSet(subA(1),diag3,INSERT_VALUES,ierr);CHKERRQ(ierr)
call MatAssemblyBegin(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
call MatAssemblyEnd(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  
! matrix a01
call 
   MatCreateAIJ(MPI_COMM_WORLD,3*n,n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,1,PETSC_NULL_INTEGER,subA(2),ierr);CHKERRQ(ierr)
call MatGetOwnershipRange(subA(2),start,end,ierr);CHKERRQ(ierr);
do i=start,end-1
   j=mod(i,size*n)
   call MatSetValue(subA(2),i,j,one,INSERT_VALUES,ierr);CHKERRQ(ierr)
end do
call MatAssemblyBegin(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
call MatAssemblyEnd(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  
! matrix a10
call  MatTranspose(subA(2),MAT_INITIAL_MATRIX,subA(3),ierr);CHKERRQ(ierr)
  
! matrix a11 (empty)
call 
   MatCreateAIJ(MPI_COMM_WORLD,n,n,PETSC_DECIDE,PETSC_DECIDE,0,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(4),ierr);CHKERRQ(ierr)
call MatAssemblyBegin(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
call MatAssemblyEnd(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  
! nested mat [a00,a01;a10,a11]
call 
   MatCreateNest(MPI_COMM_WORLD,2,PETSC_NULL_OBJECT,2,PETSC_NULL_OBJECT,subA,A,ierr);CHKERRQ(ierr)
call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
   print *, PETSC_NULL_OBJECT
call MatNestGetISs(A,isg,PETSC_NULL_OBJECT,ierr);CHKERRQ(ierr);
   print *, PETSC_NULL_OBJECT
  
call PetscFinalize(ierr)
  
   end program fieldsplittry2
   $ ./fieldsplittry2
   0
39367824
   $
  
  
   dr. ir. Christiaan Klaij
   CFD Researcher
   Research  Development
   E mailto:C.Klaij at marin.nl
   T +31 317 49 33 44
  
  
   MARIN
   2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
   T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
  


 dr. ir. Christiaan Klaij
 CFD Researcher
 Research  Development
 E mailto:c.kl...@marin.nl
 T +31 317 49 33 44


 MARIN
 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
 T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl




Re: [petsc-users] PETSC_NULL_OBJECT gets corrupt after call to MatNestGetISs in fortran

2015-04-24 Thread Klaij, Christiaan
Barry, Satish

Any news on this issue?

Chris

 On Feb 12, 2015, at 07:13:08 CST, Smith, Barry bsm...@mcs.anl.gov wrote:

Thanks for reporting this. Currently the Fortran stub for this function is 
 generated automatically which means it does not have the logic for handling a 
 PETSC_NULL_OBJECT argument.

 Satish, could you please see if you can add a custom fortran stub for 
 this function in maint?

   Thanks

Barry

  On Feb 12, 2015, at 3:02 AM, Klaij, Christiaan C.Klaij at marin.nl wrote:
 
  Using petsc-3.5.3, I noticed that PETSC_NULL_OBJECT gets corrupt after 
  calling MatNestGetISs in fortran. Here's a small example:
 
  $ cat fieldsplittry2.F90
  program fieldsplittry2
 
   use petscksp
   implicit none
  #include finclude/petsckspdef.h
 
   PetscErrorCode :: ierr
   PetscInt   :: size,i,j,start,end,n=4,numsplit=1
   PetscScalar:: zero=0.0,one=1.0
   Vec:: diag3,x,b
   Mat:: A,subA(4),myS
   PC :: pc,subpc(2)
   KSP:: ksp,subksp(2)
   IS :: isg(2)
 
   call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)
   call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr);
 
   ! vectors
   call VecCreateMPI(MPI_COMM_WORLD,3*n,PETSC_DECIDE,diag3,ierr); 
  CHKERRQ(ierr)
   call VecSet(diag3,one,ierr); CHKERRQ(ierr)
 
   call VecCreateMPI(MPI_COMM_WORLD,4*n,PETSC_DECIDE,x,ierr); CHKERRQ(ierr)
   call VecSet(x,zero,ierr); CHKERRQ(ierr)
 
   call VecDuplicate(x,b,ierr); CHKERRQ(ierr)
   call VecSet(b,one,ierr); CHKERRQ(ierr)
 
   ! matrix a00
   call 
  MatCreateAIJ(MPI_COMM_WORLD,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(1),ierr);CHKERRQ(ierr)
   call MatDiagonalSet(subA(1),diag3,INSERT_VALUES,ierr);CHKERRQ(ierr)
   call MatAssemblyBegin(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
   call MatAssemblyEnd(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
 
   ! matrix a01
   call 
  MatCreateAIJ(MPI_COMM_WORLD,3*n,n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,1,PETSC_NULL_INTEGER,subA(2),ierr);CHKERRQ(ierr)
   call MatGetOwnershipRange(subA(2),start,end,ierr);CHKERRQ(ierr);
   do i=start,end-1
  j=mod(i,size*n)
  call MatSetValue(subA(2),i,j,one,INSERT_VALUES,ierr);CHKERRQ(ierr)
   end do
   call MatAssemblyBegin(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
   call MatAssemblyEnd(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
 
   ! matrix a10
   call  MatTranspose(subA(2),MAT_INITIAL_MATRIX,subA(3),ierr);CHKERRQ(ierr)
 
   ! matrix a11 (empty)
   call 
  MatCreateAIJ(MPI_COMM_WORLD,n,n,PETSC_DECIDE,PETSC_DECIDE,0,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(4),ierr);CHKERRQ(ierr)
   call MatAssemblyBegin(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
   call MatAssemblyEnd(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
 
   ! nested mat [a00,a01;a10,a11]
   call 
  MatCreateNest(MPI_COMM_WORLD,2,PETSC_NULL_OBJECT,2,PETSC_NULL_OBJECT,subA,A,ierr);CHKERRQ(ierr)
   call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
   call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  print *, PETSC_NULL_OBJECT
   call MatNestGetISs(A,isg,PETSC_NULL_OBJECT,ierr);CHKERRQ(ierr);
  print *, PETSC_NULL_OBJECT
 
   call PetscFinalize(ierr)
 
  end program fieldsplittry2
  $ ./fieldsplittry2
  0
   39367824
  $
 
 
  dr. ir. Christiaan Klaij
  CFD Researcher
  Research  Development
  E mailto:C.Klaij at marin.nl
  T +31 317 49 33 44
 
 
  MARIN
  2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
  T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
 


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

2015-04-11 Thread Klaij, Christiaan
Barry, Matt

Thanks, the total iteration count is already very useful. Looking forward to 
the custom monitors as well.

Chris

From: Barry Smith bsm...@mcs.anl.gov
Sent: Saturday, April 11, 2015 12:27 AM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

  Chris,

  I have added KSPGetTotalIterations() to the branch 
barry/add-ksp-total-iterations/master and next. After tests it will go into 
master

  Barry

 On Apr 10, 2015, at 8:07 AM, Klaij, Christiaan c.kl...@marin.nl wrote:

 Barry,

 Sure, I can call PCFieldSplitGetSubKSP() to get the fieldsplit_0
 ksp and then KSPGetIterationNumber, but what does this number
 mean?

 It appears to be the number of iterations of the last time that
 the subsystem was solved, right? If so, this corresponds to the
 last iteration of the coupled system, how about all the previous
 iterations?

 Chris
 
 From: Barry Smith bsm...@mcs.anl.gov
 Sent: Friday, April 10, 2015 2:48 PM
 To: Klaij, Christiaan
 Cc: petsc-users@mcs.anl.gov
 Subject: Re: [petsc-users] monitoring the convergence of fieldsplit 0 and 1

   Chris,

 It appears you should call PCFieldSplitGetSubKSP() and then get the 
 information you want out of the individual KSPs. If this doesn't work please 
 let us know.

   Barry

 On Apr 10, 2015, at 6:48 AM, Klaij, Christiaan c.kl...@marin.nl wrote:

 A question when using PCFieldSplit: for each linear iteration of
the system, how many iterations for fieldsplit 0 and 1?

 One way to find out is to run with -ksp_monitor,
-fieldsplit_0_ksp_monitor and -fieldsplit_1_ksp_monitor. This
 gives the complete convergence history.

 Another way, suggested by Matt, is to use -ksp_monitor,
 -fieldsplit_0_ksp_converged_reason and
 -fieldsplit_1_ksp_converged_reason. This gives only the totals
 for fieldsplit 0 and 1 (but without saying for which one).

Both ways require processing the output somehow, which is a bit
 inconvenient. Could KSPGetResidualHistory perhaps return (some)
 information on the subsystems' convergence for processing inside
 the code?

 Chris


 dr. ir. Christiaan Klaij
 CFD Researcher
 Research  Development
 E mailto:c.kl...@marin.nl
 T +31 317 49 33 44


 MARIN
 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
 T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
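
A minimal Fortran sketch (not from the thread itself) of how KSPGetTotalIterations could be called once the branch is in master; it assumes the usual (object, value, ierr) form of the auto-generated Fortran bindings, with pc and ierr declared as in the example programs further down this digest:

      PetscInt :: nsub, its0, its1
      KSP      :: subksp(2)

      call PCFieldSplitGetSubKSP(pc,nsub,subksp,ierr); CHKERRQ(ierr)
      ! running totals over all outer iterations, not just the last inner solve
      call KSPGetTotalIterations(subksp(1),its0,ierr); CHKERRQ(ierr)
      call KSPGetTotalIterations(subksp(2),its1,ierr); CHKERRQ(ierr)
      print *, 'fieldsplit_0 total iterations:', its0
      print *, 'fieldsplit_1 total iterations:', its1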





[petsc-users] PETSC_NULL_OBJECT gets corrupt after call to MatNestGetISs in fortran

2015-02-12 Thread Klaij, Christiaan
Using petsc-3.5.3, I noticed that PETSC_NULL_OBJECT gets corrupt after calling 
MatNestGetISs in fortran. Here's a small example:

$ cat fieldsplittry2.F90
program fieldsplittry2

  use petscksp
  implicit none
#include finclude/petsckspdef.h

  PetscErrorCode :: ierr
  PetscInt   :: size,i,j,start,end,n=4,numsplit=1
  PetscScalar:: zero=0.0,one=1.0
  Vec:: diag3,x,b
  Mat:: A,subA(4),myS
  PC :: pc,subpc(2)
  KSP:: ksp,subksp(2)
  IS :: isg(2)

  call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)
  call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr);

  ! vectors
  call VecCreateMPI(MPI_COMM_WORLD,3*n,PETSC_DECIDE,diag3,ierr); CHKERRQ(ierr)
  call VecSet(diag3,one,ierr); CHKERRQ(ierr)

  call VecCreateMPI(MPI_COMM_WORLD,4*n,PETSC_DECIDE,x,ierr); CHKERRQ(ierr)
  call VecSet(x,zero,ierr); CHKERRQ(ierr)

  call VecDuplicate(x,b,ierr); CHKERRQ(ierr)
  call VecSet(b,one,ierr); CHKERRQ(ierr)

  ! matrix a00
  call 
MatCreateAIJ(MPI_COMM_WORLD,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(1),ierr);CHKERRQ(ierr)
  call MatDiagonalSet(subA(1),diag3,INSERT_VALUES,ierr);CHKERRQ(ierr)
  call MatAssemblyBegin(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  call MatAssemblyEnd(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)

  ! matrix a01
  call 
MatCreateAIJ(MPI_COMM_WORLD,3*n,n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,1,PETSC_NULL_INTEGER,subA(2),ierr);CHKERRQ(ierr)
  call MatGetOwnershipRange(subA(2),start,end,ierr);CHKERRQ(ierr);
  do i=start,end-1
 j=mod(i,size*n)
 call MatSetValue(subA(2),i,j,one,INSERT_VALUES,ierr);CHKERRQ(ierr)
  end do
  call MatAssemblyBegin(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  call MatAssemblyEnd(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)

  ! matrix a10
  call  MatTranspose(subA(2),MAT_INITIAL_MATRIX,subA(3),ierr);CHKERRQ(ierr)

  ! matrix a11 (empty)
  call 
MatCreateAIJ(MPI_COMM_WORLD,n,n,PETSC_DECIDE,PETSC_DECIDE,0,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(4),ierr);CHKERRQ(ierr)
  call MatAssemblyBegin(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  call MatAssemblyEnd(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)

  ! nested mat [a00,a01;a10,a11]
  call 
MatCreateNest(MPI_COMM_WORLD,2,PETSC_NULL_OBJECT,2,PETSC_NULL_OBJECT,subA,A,ierr);CHKERRQ(ierr)
  call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
print *, PETSC_NULL_OBJECT
  call MatNestGetISs(A,isg,PETSC_NULL_OBJECT,ierr);CHKERRQ(ierr);
print *, PETSC_NULL_OBJECT

  call PetscFinalize(ierr)

end program fieldsplittry2
$ ./fieldsplittry2
 0
  39367824
$


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
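
A possible workaround sketch until a build with the fixed stub is available (my own suggestion, not from the thread): avoid handing PETSC_NULL_OBJECT to the auto-generated stub by asking for both index-set arrays,

      IS :: isrows(2), iscols(2)

      ! request both row and column index sets so that no PETSC_NULL_OBJECT
      ! reaches the auto-generated Fortran stub of MatNestGetISs
      call MatNestGetISs(A,isrows,iscols,ierr); CHKERRQ(ierr)
      ! use isrows(1), isrows(2) for the fieldsplit; iscols can simply be ignored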



Re: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm

2015-02-06 Thread Klaij, Christiaan
Hi Fabian,

After reading your thread, I think we have the same
discretization. Some thoughts:

- The stabilization by pressure-weighted interpolation should
  lead to a diagonally dominant system that can be solved by
  algebraic multigrid (done for example in CFX), see
  http://dx.doi.org/10.1080/10407790.2014.894448
  http://dx.doi.org/10.1016/j.jcp.2008.08.027

- If you go for fieldsplit with mild tolerances, the
  preconditioner will become variable and you might need FGMRES
  or GCR instead.

- As you point out, most preconditioners are designed for stable
  FEM. My experience is that those conclusions sometimes (but not
  always) hold for FVM. Due to the specific form of the
  stabilization, other choices might be feasible. More research
  is needed :-)

Chris


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



Re: [petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm

2015-02-06 Thread Klaij, Christiaan
Hi Dave,

My understanding is that stabilization by pressure weighted
interpolation in FVM is in essence a fourth order pressure
derivative with proper scaling. But in FVM it is usually
implemented in defect correction form: only part of the
stabilization is in the A11 block of the coupled matrix, the rest
is in the rhs vector. This makes the coupled system diagonally
dominant and solvable by AMG. Early work was done at Waterloo in
the late eighties and early nineties, see
http://dx.doi.org/10.2514/6.1996-297

Chris


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



[petsc-users] how to change KSP of A00 inside the Schur complement?

2014-09-09 Thread Klaij, Christiaan
In the program below, I'm using PCFieldSplitGetSubKSP to get the
sub KSP's of a Schur fieldsplit preconditioner. I'm setting
fieldsplit_0 to BICG+ILU and fieldsplit_1 to CG+ICC. Running

$ ./fieldsplittry -ksp_view

shows that this works as expected (full output below). Now, I
would like to change the KSP of A00 inside the Schur complement,
so I'm running

$ ./fieldsplittry -fieldsplit_1_inner_ksp_type preonly 
-fieldsplit_1_inner_pc_type jacobi -ksp_view

(full output below). To my surprise, this shows the
fieldsplit_1_inner_ KSP to be BICG+ILU while the fieldsplit_0_
KSP is changed to preonly; the fieldsplit_0_ PC is still ILU (no
Jacobi anywhere).  What am I doing wrong this time?

The bottom line is: how do I set the KSP of A00 inside the Schur
complement to preonly+jacobi while keeping the other settings?
Preferably directly in the code without command-line arguments.

Chris


$ cat fieldsplittry.F90
program fieldsplittry

  use petscksp
  implicit none
#include finclude/petsckspdef.h

  PetscErrorCode :: ierr
  PetscInt   :: size,i,j,start,end,n=4,numsplit=1
  PetscScalar:: zero=0.0,one=1.0
  Vec:: diag3,x,b
  Mat:: A,subA(4),myS
  PC :: pc,subpc(2)
  KSP:: ksp,subksp(2)
  IS :: isg(2)

  call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)
  call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr); CHKERRQ(ierr);

  ! vectors
  call VecCreateMPI(MPI_COMM_WORLD,3*n,PETSC_DECIDE,diag3,ierr); CHKERRQ(ierr)
  call VecSet(diag3,one,ierr); CHKERRQ(ierr)

  call VecCreateMPI(MPI_COMM_WORLD,4*n,PETSC_DECIDE,x,ierr); CHKERRQ(ierr)
  call VecSet(x,zero,ierr); CHKERRQ(ierr)

  call VecDuplicate(x,b,ierr); CHKERRQ(ierr)
  call VecSet(b,one,ierr); CHKERRQ(ierr)

  ! matrix a00
  call 
MatCreateAIJ(MPI_COMM_WORLD,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(1),ierr);CHKERRQ(ierr)
  call MatDiagonalSet(subA(1),diag3,INSERT_VALUES,ierr);CHKERRQ(ierr)
  call MatAssemblyBegin(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  call MatAssemblyEnd(subA(1),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)

  ! matrix a01
  call 
MatCreateAIJ(MPI_COMM_WORLD,3*n,n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,1,PETSC_NULL_INTEGER,subA(2),ierr);CHKERRQ(ierr)
  call MatGetOwnershipRange(subA(2),start,end,ierr);CHKERRQ(ierr);
  do i=start,end-1
 j=mod(i,size*n)
 call MatSetValue(subA(2),i,j,one,INSERT_VALUES,ierr);CHKERRQ(ierr)
  end do
  call MatAssemblyBegin(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  call MatAssemblyEnd(subA(2),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)

  ! matrix a10
  call  MatTranspose(subA(2),MAT_INITIAL_MATRIX,subA(3),ierr);CHKERRQ(ierr)

  ! matrix a11 (empty)
  call 
MatCreateAIJ(MPI_COMM_WORLD,n,n,PETSC_DECIDE,PETSC_DECIDE,0,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(4),ierr);CHKERRQ(ierr)
  call MatAssemblyBegin(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  call MatAssemblyEnd(subA(4),MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)

  ! nested mat [a00,a01;a10,a11]
  call 
MatCreateNest(MPI_COMM_WORLD,2,PETSC_NULL_OBJECT,2,PETSC_NULL_OBJECT,subA,A,ierr);CHKERRQ(ierr)
  call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr);CHKERRQ(ierr)
  call MatNestGetISs(A,isg,PETSC_NULL_OBJECT,ierr);CHKERRQ(ierr);

  ! KSP and PC
  call KSPCreate(MPI_COMM_WORLD,ksp,ierr);CHKERRQ(ierr)
  call KSPSetOperators(ksp,A,A,ierr);CHKERRQ(ierr)
  call KSPSetType(ksp,KSPGCR,ierr);CHKERRQ(ierr)
  call KSPGetPC(ksp,pc,ierr);CHKERRQ(ierr)
  call PCSetType(pc,PCFIELDSPLIT,ierr);CHKERRQ(ierr)
  call PCFieldSplitSetType(pc,PC_COMPOSITE_SCHUR,ierr);CHKERRQ(ierr)
  call PCFieldSplitSetIS(pc,0,isg(1),ierr);CHKERRQ(ierr)
  call PCFieldSplitSetIS(pc,1,isg(2),ierr);CHKERRQ(ierr)
  call 
PCFieldSplitSetSchurFactType(pc,PC_FIELDSPLIT_SCHUR_FACT_LOWER,ierr);CHKERRQ(ierr)
  call 
PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_SELFP,PETSC_NULL_OBJECT,ierr);CHKERRQ(ierr)
  call KSPSetUp(ksp,ierr);CHKERRQ(ierr);
  call PCFieldSplitGetSubKSP(pc,numsplit,subksp,ierr);CHKERRQ(ierr)
  call KSPSetType(subksp(1),KSPBICG,ierr);CHKERRQ(ierr)
  call KSPGetPC(subksp(1),subpc(1),ierr);CHKERRQ(ierr)
  call PCSetType(subpc(1),PCILU,ierr);CHKERRQ(ierr)
  call KSPSetType(subksp(2),KSPCG,ierr);CHKERRQ(ierr)
  call KSPGetPC(subksp(2),subpc(2),ierr);CHKERRQ(ierr)
  call PCSetType(subpc(2),PCICC,ierr);CHKERRQ(ierr)
!  call PetscFree(subksp);CHKERRQ(ierr);
  call KSPSetFromOptions(ksp,ierr);CHKERRQ(ierr)
  call KSPSolve(ksp,b,x,ierr);CHKERRQ(ierr)
  call KSPGetSolution(ksp,x,ierr);CHKERRQ(ierr)
!  call VecView(x,PETSC_VIEWER_STDOUT_WORLD,ierr);CHKERRQ(ierr)

  call PetscFinalize(ierr)

end program fieldsplittry


$ ./fieldsplittry -ksp_view
KSP Object: 1 MPI processes
  type: gcr
GCR: restart = 30
GCR: restarts performed = 1
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1
  right preconditioning
  using UNPRECONDITIONED norm 
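
One way to do this from code rather than from the command line is sketched below (my own suggestion, assuming the petsc-3.5 three-argument Fortran form of PetscOptionsSetValue): push the inner-solver options into the options database before KSPSetFromOptions is called,

      ! set ksp(A00) inside the Schur complement to preonly+jacobi; the
      ! options are picked up when KSPSetFromOptions/KSPSetUp run later on
      call PetscOptionsSetValue('-fieldsplit_1_inner_ksp_type','preonly',ierr); CHKERRQ(ierr)
      call PetscOptionsSetValue('-fieldsplit_1_inner_pc_type','jacobi',ierr); CHKERRQ(ierr)
      call KSPSetFromOptions(ksp,ierr); CHKERRQ(ierr)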

Re: [petsc-users] fieldsplit_0_ monitor in combination with selfp

2014-09-05 Thread Klaij, Christiaan
Thanks! I've spotted another difference: you are setting the
fieldsplit_0_ksp_type and I'm not, just relying on the default
instead. If I add -fieldsplit_0_ksp_type gmres then I also get
the correct answer. Probably, you will get my problem if you
remove -fieldsplit_velocity.

mpiexec -n 2 ./ex70 -nx 4 -ny 6 \
-ksp_type fgmres \
-pc_type fieldsplit \
-pc_fieldsplit_type schur \
-pc_fieldsplit_schur_fact_type lower \
-pc_fieldsplit_schur_precondition selfp \
-fieldsplit_1_inner_ksp_type preonly \
-fieldsplit_1_inner_pc_type jacobi \
-fieldsplit_0_ksp_monitor -fieldsplit_0_ksp_max_it 1 \
-fieldsplit_1_ksp_monitor -fieldsplit_1_ksp_max_it 1 \
-ksp_monitor -ksp_max_it 1 \
-fieldsplit_0_ksp_type gmres -ksp_view


KSP Object: 2 MPI processes
  type: fgmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt 
Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 2 MPI processes
  type: fieldsplit
FieldSplit with Schur preconditioner, factorization LOWER
Preconditioner for the Schur complement formed from Sp, an assembled 
approximation to S, which uses (the lumped) A00's diagonal's inverse
Split info:
Split number 0 Defined by IS
Split number 1 Defined by IS
KSP solver for A00 block
  KSP Object:  (fieldsplit_0_)   2 MPI processes
type: gmres











From: Matthew Knepley knep...@gmail.com
Sent: Friday, September 05, 2014 2:10 PM
To: Klaij, Christiaan; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] fieldsplit_0_ monitor in combination with selfp

On Fri, Sep 5, 2014 at 1:34 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:

Matt,

I think the problem is somehow related to -pc_fieldsplit_schur_precondition 
selfp. In the example below your are not using that option.

Here is the selfp output. It retains the A00 solver.

ex62 -run_type full -refinement_limit 0.00625 -bc_type dirichlet -interpolate 1 
-vel_petscspace_order 2 -pres_petscspace_order 1 -ksp_type fgmres 
-ksp_gmres_restart 100 -ksp_rtol 1.0e-9 -pc_type fieldsplit -pc_fieldsplit_type 
schur -pc_fieldsplit_schur_factorization_type full 
-fieldsplit_pressure_ksp_rtol 1e-10 -fieldsplit_velocity_ksp_type gmres 
-fieldsplit_velocity_pc_type lu -fieldsplit_pressure_pc_type jacobi 
-snes_monitor_short -ksp_monitor_short -snes_converged_reason 
-ksp_converged_reason -snes_view -show_solution 0 
-fieldsplit_pressure_inner_ksp_type preonly -fieldsplit_pressure_inner_pc_type 
jacobi -pc_fieldsplit_schur_precondition selfp

SNES Object: 1 MPI processes
  type: newtonls
  maximum iterations=50, maximum function evaluations=1
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=20
  total number of function evaluations=2
  SNESLineSearch Object:   1 MPI processes
type: bt
  interpolation: cubic
  alpha=1.00e-04
maxstep=1.00e+08, minlambda=1.00e-12
tolerances: relative=1.00e-08, absolute=1.00e-15, 
lambda=1.00e-08
maximum iterations=40
  KSP Object:   1 MPI processes
type: fgmres
  GMRES: restart=100, using Classical (unmodified) Gram-Schmidt 
Orthogonalization with no iterative refinement
  GMRES: happy breakdown tolerance 1e-30
maximum iterations=1, initial guess is zero
tolerances:  relative=1e-09, absolute=1e-50, divergence=1
right preconditioning
has attached null space
using UNPRECONDITIONED norm type for convergence test
  PC Object:   1 MPI processes
type: fieldsplit
  FieldSplit with Schur preconditioner, factorization FULL
  Preconditioner for the Schur complement formed from Sp, an assembled 
approximation to S, which uses (the lumped) A00's diagonal's inverse
  Split info:
  Split number 0 Defined by IS
  Split number 1 Defined by IS
  KSP solver for A00 block
KSP Object:(fieldsplit_velocity_) 1 MPI processes
  type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt 
Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1
  left preconditioning
  using PRECONDITIONED norm type

Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

2014-09-04 Thread Klaij, Christiaan
I did a make allclean, make and install. That solved the two
problems. So I'm guessing these values are taken from the petscksp
module or its predecessors instead of being taken directly from
the finclude/petscpc.h file. Case closed.

Chris

From: Matthew Knepley knep...@gmail.com
Sent: Wednesday, September 03, 2014 5:02 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Wed, Sep 3, 2014 at 9:46 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:

print *, PC_FIELDSPLIT_SCHUR_PRE_USER

gives: 2.

(changing to *_SELFP gives: This name does not have a type, and must have an 
explicit type.   [PC_FIELDSPLIT_SCHUR_PRE_SELFP])

I have

next:/PETSc3/petsc/petsc-pylith$ find include/finclude/ -type f | xargs grep 
USER
find include/finclude/ -type f | xargs grep USER
include/finclude//petscerrordef.h:#define PETSC_ERR_USER 83
include/finclude//petscpc.h:  PetscEnum PC_FIELDSPLIT_SCHUR_PRE_USER
include/finclude//petscpc.h:  parameter (PC_FIELDSPLIT_SCHUR_PRE_USER=3)
include/finclude//petsctao.h:  PetscEnum TAO_CONVERGED_USER
include/finclude//petsctao.h:  PetscEnum TAO_DIVERGED_USER
include/finclude//petsctao.h:  parameter ( TAO_CONVERGED_USER = 8)
include/finclude//petsctao.h:  parameter ( TAO_DIVERGED_USER = -8)

Where could it possibly be getting this value from? I think from some compiled 
module which you need to rebuild.

   Matt







From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Wednesday, September 03, 2014 4:44 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Wed, Sep 3, 2014 at 9:43 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:

I'm sorry, how do I do that?

print it

   Matt

Chris








From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Wednesday, September 03, 2014 4:38 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Wed, Sep 3, 2014 at 9:23 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:
Matt,

Thanks, after applying the fix to my petsc-3.5.1 install, the
small Fortran program works as expected.

Now, I would like to change the fortran strategy to the
option 3) Using Fortran modules. So, in the small fortran
program I replace these seven lines

#include finclude/petscsys.h
#include finclude/petscis.h
#include finclude/petscvec.h
#include finclude/petscmat.h
#include finclude/petscpc.h
#include finclude/petscksp.h
#include finclude/petscviewer.h

by the two following lines

use petscksp
#include finclude/petsckspdef.h

This still compiles but I get the two old problems back...

I have no idea why. Can you look at the values of those enumerations?

  Matt

Chris


From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Wednesday, September 03, 2014 2:12 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Wed, Sep 3, 2014 at 7:00 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:

Matt,

Thanks for the fix. If I understand correctly, in an existing
install of petsc-3.5.1, I would only need to replace the
file finclude/petscpc.h by the new file for the fix to
work? (instead of downloading dev, configuring, installing on
various machines).

Yes

   Matt

Chris








From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Tuesday, September 02, 2014 5:42 PM
To: Klaij, Christiaan
Cc: petsc-users

[petsc-users] fieldsplit_0_ monitor in combination with selfp

2014-09-04 Thread Klaij, Christiaan
I'm playing with the selfp option in fieldsplit using
snes/examples/tutorials/ex70.c. For example:

mpiexec -n 2 ./ex70 -nx 4 -ny 6 \
-ksp_type fgmres \
-pc_type fieldsplit \
-pc_fieldsplit_type schur \
-pc_fieldsplit_schur_fact_type lower \
-pc_fieldsplit_schur_precondition selfp \
-fieldsplit_1_inner_ksp_type preonly \
-fieldsplit_1_inner_pc_type jacobi \
-fieldsplit_0_ksp_monitor -fieldsplit_0_ksp_max_it 1 \
-fieldsplit_1_ksp_monitor -fieldsplit_1_ksp_max_it 1 \
-ksp_monitor -ksp_max_it 1

gives the following output

  0 KSP Residual norm 1.229687498638e+00
Residual norms for fieldsplit_1_ solve.
0 KSP Residual norm 2.330138480101e+01
1 KSP Residual norm 1.609000846751e+01
  1 KSP Residual norm 1.180287268335e+00

To my surprise I don't see anything for the fieldsplit_0_ solve,
why?

Furthermore, if I understand correctly the above should be
exactly equivalent with

mpiexec -n 2 ./ex70 -nx 4 -ny 6 \
-ksp_type fgmres \
-pc_type fieldsplit \
-pc_fieldsplit_type schur \
-pc_fieldsplit_schur_fact_type lower \
-user_ksp \
-fieldsplit_0_ksp_monitor -fieldsplit_0_ksp_max_it 1 \
-fieldsplit_1_ksp_monitor -fieldsplit_1_ksp_max_it 1 \
-ksp_monitor -ksp_max_it 1

  0 KSP Residual norm 1.229687498638e+00
Residual norms for fieldsplit_0_ solve.
0 KSP Residual norm 5.486639587672e-01
1 KSP Residual norm 6.348354253703e-02
Residual norms for fieldsplit_1_ solve.
0 KSP Residual norm 2.321938107977e+01
1 KSP Residual norm 1.605484031258e+01
  1 KSP Residual norm 1.183225251166e+00

because -user_ksp replaces the Schur complement by the simple
approximation A11 - A10 inv(diag(A00)) A01. Besides the missing
fieldsplit_0_ part, the numbers are pretty close but not exactly
the same. Any explanation?

Chris


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
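
For reference, the two operators compared in this thread can be written (my notation, summarizing the ksp_view text and the description of -user_ksp above):

  S   = A_{11} - A_{10} A_{00}^{-1} A_{01}
  S_p = A_{11} - A_{10} \mathrm{diag}(A_{00})^{-1} A_{01}

i.e. both -pc_fieldsplit_schur_precondition selfp and the -user_ksp matrix are built from the same diagonal approximation S_p of the exact Schur complement S.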



Re: [petsc-users] fieldsplit_0_ monitor in combination with selfp

2014-09-04 Thread Klaij, Christiaan
    tolerances:  relative=1e-05, absolute=1e-50, divergence=1
    left preconditioning
    using NONE norm type for convergence test
  PC Object:  (fieldsplit_1_sub_sub_)   1 
MPI processes
    type: ilu
  ILU: out-of-place factorization
  0 levels of fill
  tolerance for zero pivot 2.22045e-14
  using diagonal shift on blocks to prevent zero pivot 
[INBLOCKS]
  matrix ordering: natural
  factor fill ratio given 1, needed 1
    Factored matrix follows:
  Mat Object:   1 MPI processes
    type: seqaij
    rows=24, cols=24
    package used to perform factorization: petsc
    total: nonzeros=120, allocated nonzeros=120
    total number of mallocs used during MatSetValues calls 
=0
  not using I-node routines
    linear system matrix = precond matrix:
    Mat Object: 1 MPI processes
  type: seqaij
  rows=24, cols=24
  total: nonzeros=120, allocated nonzeros=120
  total number of mallocs used during MatSetValues calls =0
    not using I-node routines
    linear system matrix = precond matrix:
    Mat Object: 1 MPI processes
  type: mpiaij
  rows=24, cols=24
  total: nonzeros=120, allocated nonzeros=120
  total number of mallocs used during MatSetValues calls =0
    not using I-node (on process 0) routines
    linear system matrix followed by preconditioner matrix:
    Mat Object:    (fieldsplit_1_) 1 MPI processes
  type: schurcomplement
  rows=24, cols=24
    Schur complement A11 - A10 inv(A00) A01
    A11
  Mat Object:  (fieldsplit_1_)   1 MPI 
processes
    type: mpiaij
    rows=24, cols=24
    total: nonzeros=0, allocated nonzeros=0
    total number of mallocs used during MatSetValues calls =0
  using I-node (on process 0) routines: found 5 nodes, limit 
used is 5
    A10
  Mat Object:  (a10_)   1 MPI processes
    type: mpiaij
    rows=24, cols=48
    total: nonzeros=96, allocated nonzeros=96
    total number of mallocs used during MatSetValues calls =0
  not using I-node (on process 0) routines
    KSP of A00
  KSP Object:  (fieldsplit_1_inner_)   1 
MPI processes
    type: preonly
    maximum iterations=1, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=1
    left preconditioning
    using NONE norm type for convergence test
  PC Object:  (fieldsplit_1_inner_)   1 MPI 
processes
    type: jacobi
    linear system matrix = precond matrix:
    Mat Object:    (fieldsplit_0_) 1 
MPI processes
  type: mpiaij
  rows=48, cols=48
  total: nonzeros=200, allocated nonzeros=480
  total number of mallocs used during MatSetValues calls =0
    not using I-node (on process 0) routines
    A01
  Mat Object:  (a01_)   1 MPI processes
    type: mpiaij
    rows=48, cols=24
    total: nonzeros=96, allocated nonzeros=480
    total number of mallocs used during MatSetValues calls =0
  not using I-node (on process 0) routines
    Mat Object: 1 MPI processes
  type: mpiaij
  rows=24, cols=24
  total: nonzeros=120, allocated nonzeros=120
  total number of mallocs used during MatSetValues calls =0
    not using I-node (on process 0) routines
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
    type: nest
    rows=72, cols=72
  Matrix object: 
    type=nest, rows=2, cols=2 
    MatNest structure: 
    (0,0) : prefix=fieldsplit_0_, type=mpiaij, rows=48, cols=48 
    (0,1) : prefix=a01_, type=mpiaij, rows=48, cols=24 
    (1,0) : prefix=a10_, type=mpiaij, rows=24, cols=48 
    (1,1) : prefix=fieldsplit_1_, type=mpiaij, rows=24, cols=24


From: Matthew Knepley knep...@gmail.com
Sent: Thursday, September 04, 2014 2:20 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] fieldsplit_0_ monitor in combination with selfp
  



On Thu, Sep 4, 2014 at 7:06 AM, Klaij, Christiaan  c.kl

[petsc-users] suggestion for PCFieldSplitGetSubKSP

2014-09-04 Thread Klaij, Christiaan
I'm using PCFieldSplitGetSubKSP to access and modify the various
KSP's that occur in a Schur preconditioner. Now,
PCFieldSplitGetSubKSP returns two KSP's: one corresponding to A00
and one corresponding to the Schur complement.

However, in the LDU factorization, ksp(A00) occurs 4 times: in
the upper block, in the diagonal block, in the Schur complement
of the diagonal block and in the lower block.

So far, I'm using the runtime option -fieldsplit_1_inner to
access the ksp(A00) in the Schur complement, but it would make
sence if PCFieldSplitGetSubKSP returns all 5 KSP's involved. Or
is there another subroutine that I could call?

Chris


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
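
A possible alternative sketch (my own, not confirmed in this thread; it assumes the auto-generated Fortran stubs for KSPGetOperators and MatSchurComplementGetKSP are available in petsc-3.5): pull the inner ksp(A00) out of the Schur complement matrix held by the second sub-KSP,

      Mat      :: S, Sp
      KSP      :: subksp(2), innerksp
      PetscInt :: nsub

      call PCFieldSplitGetSubKSP(pc,nsub,subksp,ierr); CHKERRQ(ierr)
      ! subksp(2) solves the Schur system; its operator is the schurcomplement
      ! Mat, which carries the ksp(A00) used inside S = A11 - A10 inv(A00) A01
      call KSPGetOperators(subksp(2),S,Sp,ierr); CHKERRQ(ierr)
      call MatSchurComplementGetKSP(S,innerksp,ierr); CHKERRQ(ierr)
      call KSPSetType(innerksp,KSPPREONLY,ierr); CHKERRQ(ierr)

Whether this reaches all four occurrences of ksp(A00) listed above would still need checking against the ksp_view output.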



Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

2014-09-03 Thread Klaij, Christiaan
Matt,

Thanks for the fix. If I understand correctly, in an existing
install of petsc-3.5.1, I would only need to replace the
file finclude/petscpc.h by the new file for the fix to
work? (instead of downloading dev, configuring, installing on
various machines).

Chris








From: Matthew Knepley knep...@gmail.com
Sent: Tuesday, September 02, 2014 5:42 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Tue, Sep 2, 2014 at 2:08 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:

Matt,

Attached is a small Fortran code that replicates the second problem.

This was a Fortran define problem. I fixed it on next

  https://bitbucket.org/petsc/petsc/branch/knepley/fix-pc-fieldsplit-fortran

and it will be in maint and master tomorrow.

  Thanks,

 Matt

Chris



dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl







From: Klaij, Christiaan
Sent: Friday, August 29, 2014 4:42 PM
To: Matthew Knepley
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: RE: [petsc-users] PCFieldSplitSetSchurPre in fortran

Matt,

The small test code (ex70) is in C and it works fine, the problem
happens in a big Fortran code. I will try to replicate the
problem in a small Fortran code, but that will take some time.

Chris


From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Friday, August 29, 2014 4:14 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Fri, Aug 29, 2014 at 8:55 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:
I'm trying PCFieldSplitSetSchurPre with
PC_FIELDSPLIT_SCHUR_PRE_SELFP in petsc-3.5.1 using fortran.

The first problem is that PC_FIELDSPLIT_SCHUR_PRE_SELFP seems to
be missing in fortran, I get the compile error:

This name does not have a type, and must have an explicit type.   
[PC_FIELDSPLIT_SCHUR_PRE_SELFP]

while compilation works fine with _A11, _SELF and _USER.

Mark Adams has just fixed this.

The second problem is that the call doesn't seem to have any
effect. For example, I have

CALL PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,aa,ierr)
CALL PCFieldSplitSetSchurFactType(pc,PC_FIELDSPLIT_SCHUR_FACT_LOWER,ierr)

This compiles and runs, but ksp_view tells me

PC Object:(sys_) 3 MPI processes
  type: fieldsplit
FieldSplit with Schur preconditioner, factorization LOWER
Preconditioner for the Schur complement formed from A11

So changing the factorization from the default FULL to LOWER did
work, but changing the preconditioner from A11 to USER didn't.

I've also tried to run directly from the command line using

-sys_pc_fieldsplit_schur_precondition user -sys_ksp_view

This works in the sense that I don't get the "WARNING! There are
options you set that were not used!" message, but still ksp_view
reports A11 instead of user provided matrix.

Can you send a small test code, since I use this everyday here and it works.

  Thanks,

 Matt

Chris


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nlmailto:c.kl...@marin.nl
T +31 317 49 33 44tel:%2B31%20317%2049%2033%2044


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11tel:%2B31%20317%2049%2039%2011, F +31 317 49 32 
45tel:%2B31%20317%2049%2032%2045, I www.marin.nlhttp://www.marin.nl




--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener



--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments

Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

2014-09-03 Thread Klaij, Christiaan
Matt,

Thanks, after applying the fix to my petsc-3.5.1 install, the
small Fortran program works as expected.

Now, I would like to change the fortran strategy to the
option 3) Using Fortran modules. So, in the small fortran
program I replace these seven lines

#include finclude/petscsys.h
#include finclude/petscis.h
#include finclude/petscvec.h
#include finclude/petscmat.h
#include finclude/petscpc.h
#include finclude/petscksp.h
#include finclude/petscviewer.h

by the two following lines

use petscksp
#include finclude/petsckspdef.h

This still compiles but I get the two old problems back...

Chris


From: Matthew Knepley knep...@gmail.com
Sent: Wednesday, September 03, 2014 2:12 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Wed, Sep 3, 2014 at 7:00 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:

Matt,

Thanks for the fix. If I understand correctly, in an existing
install of petsc-3.5.1, I would only need to replace the
file finclude/petscpc.h by the new file for the fix to
work? (instead of downloading dev, configuring, installing on
various machines).

Yes

   Matt

Chris








From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Tuesday, September 02, 2014 5:42 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Tue, Sep 2, 2014 at 2:08 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:

Matt,

Attached is a small Fortran code that replicates the second problem.

This was a Fortran define problem. I fixed it on next

  https://bitbucket.org/petsc/petsc/branch/knepley/fix-pc-fieldsplit-fortran

and it will be in maint and master tomorrow.

  Thanks,

 Matt

Chris



dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl







From: Klaij, Christiaan
Sent: Friday, August 29, 2014 4:42 PM
To: Matthew Knepley
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: RE: [petsc-users] PCFieldSplitSetSchurPre in fortran

Matt,

The small test code (ex70) is in C and it works fine, the problem
happens in a big Fortran code. I will try to replicate the
problem in a small Fortran code, but that will take some time.

Chris


From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Friday, August 29, 2014 4:14 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Fri, Aug 29, 2014 at 8:55 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:
I'm trying PCFieldSplitSetSchurPre with
PC_FIELDSPLIT_SCHUR_PRE_SELFP in petsc-3.5.1 using fortran.

The first problem is that PC_FIELDSPLIT_SCHUR_PRE_SELFP seems to
be missing in fortran, I get the compile error:

This name does not have a type, and must have an explicit type.   
[PC_FIELDSPLIT_SCHUR_PRE_SELFP]

while compilation works fine with _A11, _SELF and _USER.

Mark Adams has just fixed this.

The second problem is that the call doesn't seem to have any
effect. For example, I have

CALL PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,aa,ierr)
CALL PCFieldSplitSetSchurFactType(pc,PC_FIELDSPLIT_SCHUR_FACT_LOWER,ierr)

This compiles and runs, but ksp_view tells me

PC Object:(sys_) 3 MPI processes
  type: fieldsplit
FieldSplit with Schur preconditioner, factorization LOWER
Preconditioner for the Schur complement formed from A11

So changing the factorization from the default FULL to LOWER did
work, but changing the preconditioner from A11 to USER didn't.

I've also tried to run directly from the command line using

-sys_pc_fieldsplit_schur_precondition user

Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

2014-09-03 Thread Klaij, Christiaan
I'm sorry, how do I do that?

Chris








From: Matthew Knepley knep...@gmail.com
Sent: Wednesday, September 03, 2014 4:38 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Wed, Sep 3, 2014 at 9:23 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:
Matt,

Thanks, after applying the fix to my petsc-3.5.1 install, the
small Fortran program works as expected.

Now, I would like to change the fortran strategy to the
option 3) Using Fortran modules. So, in the small fortran
program I replace these seven lines

#include finclude/petscsys.h
#include finclude/petscis.h
#include finclude/petscvec.h
#include finclude/petscmat.h
#include finclude/petscpc.h
#include finclude/petscksp.h
#include finclude/petscviewer.h

by the two following lines

use petscksp
#include finclude/petsckspdef.h

This still compiles but I get the two old problems back...

I have no idea why. Can you look at the values of those enumerations?

  Matt

Chris


From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Wednesday, September 03, 2014 2:12 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Wed, Sep 3, 2014 at 7:00 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:

Matt,

Thanks for the fix. If I understand correctly, in an existing
install of petsc-3.5.1, I would only need to replace the
file finclude/petscpc.h by the new file for the fix to
work? (instead of downloading dev, configuring, installing on
various machines).

Yes

   Matt

Chris








From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Tuesday, September 02, 2014 5:42 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Tue, Sep 2, 2014 at 2:08 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:

Matt,

Attached is a small Fortran code that replicates the second problem.

This was a Fortran define problem. I fixed it on next

  https://bitbucket.org/petsc/petsc/branch/knepley/fix-pc-fieldsplit-fortran

and it will be in maint and master tomorrow.

  Thanks,

 Matt

Chris



dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl







From: Klaij, Christiaan
Sent: Friday, August 29, 2014 4:42 PM
To: Matthew Knepley
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: RE: [petsc-users] PCFieldSplitSetSchurPre in fortran

Matt,

The small test code (ex70) is in C and it works fine, the problem
happens in a big Fortran code. I will try to replicate the
problem in a small Fortran code, but that will take some time.

Chris


From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Friday, August 29, 2014 4:14 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Fri, Aug 29, 2014 at 8:55 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:
I'm trying PCFieldSplitSetSchurPre with
PC_FIELDSPLIT_SCHUR_PRE_SELFP in petsc-3.5.1 using fortran.

The first problem is that PC_FIELDSPLIT_SCHUR_PRE_SELFP seems

Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

2014-09-03 Thread Klaij, Christiaan
print *, PC_FIELDSPLIT_SCHUR_PRE_USER

gives: 2.

(changing to *_SELFP gives: This name does not have a type, and must have an 
explicit type.   [PC_FIELDSPLIT_SCHUR_PRE_SELFP])









From: Matthew Knepley knep...@gmail.com
Sent: Wednesday, September 03, 2014 4:44 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Wed, Sep 3, 2014 at 9:43 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:

I'm sorry, how do I do that?

print it

   Matt

Chris








From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Wednesday, September 03, 2014 4:38 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Wed, Sep 3, 2014 at 9:23 AM, Klaij, Christiaan 
c.kl...@marin.nlmailto:c.kl...@marin.nl wrote:
Matt,

Thanks, after applying the fix to my petsc-3.5.1 install, the
small Fortran program works as expected.

Now, I would like to change the fortran strategy to the
option 3) Using Fortran modules. So, in the small fortran
program I replace these seven lines

#include finclude/petscsys.h
#include finclude/petscis.h
#include finclude/petscvec.h
#include finclude/petscmat.h
#include finclude/petscpc.h
#include finclude/petscksp.h
#include finclude/petscviewer.h

by the two following lines

use petscksp
#include finclude/petsckspdef.h

This still compiles but I get the two old problems back...

I have no idea why. Can you look at the values of those enumerations?

  Matt

Chris


From: Matthew Knepley knep...@gmail.commailto:knep...@gmail.com
Sent: Wednesday, September 03, 2014 2:12 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.govmailto:petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Wed, Sep 3, 2014 at 7:00 AM, Klaij, Christiaan c.kl...@marin.nl wrote:

Matt,

Thanks for the fix. If I understand correctly, in an existing
install of petsc-3.5.1, I would only need to replace the
file finclude/petscpc.h by the new file for the fix to
work? (instead of downloading dev, configuring, installing on
various machines).

Yes

   Matt

Chris








From: Matthew Knepley knep...@gmail.com
Sent: Tuesday, September 02, 2014 5:42 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Tue, Sep 2, 2014 at 2:08 AM, Klaij, Christiaan c.kl...@marin.nl wrote:

Matt,

Attached is a small Fortran code that replicates the second problem.

This was a Fortran define problem. I fixed it on next

  https://bitbucket.org/petsc/petsc/branch/knepley/fix-pc-fieldsplit-fortran

and it will be in maint and master tomorrow.

  Thanks,

 Matt

Chris



dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl







From: Klaij, Christiaan
Sent: Friday

Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

2014-09-02 Thread Klaij, Christiaan
Matt,

Attached is a small Fortran code that replicates the second problem.

Chris



dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl







From: Klaij, Christiaan
Sent: Friday, August 29, 2014 4:42 PM
To: Matthew Knepley
Cc: petsc-users@mcs.anl.gov
Subject: RE: [petsc-users] PCFieldSplitSetSchurPre in fortran

Matt,

The small test code (ex70) is in C and it works fine, the problem
happens in a big Fortran code. I will try to replicate the
problem in a small Fortran code, but that will take some time.

Chris


From: Matthew Knepley knep...@gmail.com
Sent: Friday, August 29, 2014 4:14 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Fri, Aug 29, 2014 at 8:55 AM, Klaij, Christiaan c.kl...@marin.nl wrote:
I'm trying PCFieldSplitSetSchurPre with
PC_FIELDSPLIT_SCHUR_PRE_SELFP in petsc-3.5.1 using fortran.

The first problem is that PC_FIELDSPLIT_SCHUR_PRE_SELFP seems to
be missing in fortran, I get the compile error:

This name does not have a type, and must have an explicit type.   
[PC_FIELDSPLIT_SCHUR_PRE_SELFP]

while compilation works fine with _A11, _SELF and _USER.

Mark Adams has just fixed this.

The second problem is that the call doesn't seem to have any
effect. For example, I have

CALL PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,aa,ierr)
CALL PCFieldSplitSetSchurFactType(pc,PC_FIELDSPLIT_SCHUR_FACT_LOWER,ierr)

This compiles and runs, but ksp_view tells me

PC Object:(sys_) 3 MPI processes
  type: fieldsplit
FieldSplit with Schur preconditioner, factorization LOWER
Preconditioner for the Schur complement formed from A11

So changing the factorization from the default FULL to LOWER did
work, but changing the preconditioner from A11 to USER didn't.

I've also tried to run directly from the command line using

-sys_pc_fieldsplit_schur_precondition user -sys_ksp_view

This works in the sense that I don't get the WARNING! There are
options you set that were not used! message, but still ksp_view
reports A11 instead of user provided matrix.
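
For the record, the two variants being compared here come down to the following calls (a sketch only, for a petsc-3.5 Fortran build with the define fix applied; PETSC_NULL_OBJECT is assumed to be the Fortran stand-in for an omitted Mat argument):

! user-supplied matrix aa as preconditioner for the Schur complement
call PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,aa,ierr)
! or let PETSc build Sp = A11 - A10 inv(diag(A00)) A01 itself
call PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_SELFP,PETSC_NULL_OBJECT,ierr)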

Can you send a small test code, since I use this everyday here and it works.

  Thanks,

 Matt

Chris


dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl




--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener


! running with ./fieldsplitbug -ksp_view shows:
!
!PC Object: 1 MPI processes
!  type: fieldsplit
!FieldSplit with Schur preconditioner, factorization LOWER
!Preconditioner for the Schur complement formed from A11
!
! despite the call to PCFieldSplitSetSchurPre.

program fieldsplitbug

  implicit none
#include "finclude/petscsys.h"
#include "finclude/petscis.h"
#include "finclude/petscvec.h"
#include "finclude/petscmat.h"
#include "finclude/petscpc.h"
#include "finclude/petscksp.h"
#include "finclude/petscviewer.h"

  PetscErrorCode :: ierr
  PetscInt   :: n=4, numsplit=1
  PetscScalar:: zero=0.0, one=1.0
  Vec:: diag1,diag3,x,b
  Mat:: A, subA(4), myS
  PC :: pc
  KSP:: ksp,subksp(2)
  IS :: isg(2)

  call PetscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr)

  ! vectors
  call VecCreateMPI(MPI_COMM_WORLD,n,PETSC_DECIDE,diag1,ierr); CHKERRQ(ierr)
  call VecSet(diag1,one,ierr); CHKERRQ(ierr)

  call VecCreateMPI(MPI_COMM_WORLD,3*n,PETSC_DECIDE,diag3,ierr); CHKERRQ(ierr)
  call VecSet(diag3,one,ierr); CHKERRQ(ierr)

  call VecCreateMPI(MPI_COMM_WORLD,4*n,PETSC_DECIDE,x,ierr); CHKERRQ(ierr)
  call VecSet(x,zero,ierr); CHKERRQ(ierr)

  call VecDuplicate(x,b,ierr); CHKERRQ(ierr)
  call VecSet(b,one,ierr); CHKERRQ(ierr)

  ! matrix a00
  call MatCreateAIJ(MPI_COMM_WORLD,3*n,3*n,PETSC_DECIDE,PETSC_DECIDE,1,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,subA(1),ierr);CHKERRQ(ierr)
  call MatDiagonalSet(subA(1),diag3,INSERT_VALUES

Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

2014-08-29 Thread Klaij, Christiaan
Matt,

The small test code (ex70) is in C and it works fine, the problem
happens in a big Fortran code. I will try to replicate the
problem in a small Fortran code, but that will take some time.

Chris



dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl







From: Matthew Knepley knep...@gmail.com
Sent: Friday, August 29, 2014 4:14 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PCFieldSplitSetSchurPre in fortran

On Fri, Aug 29, 2014 at 8:55 AM, Klaij, Christiaan c.kl...@marin.nl wrote:
I'm trying PCFieldSplitSetSchurPre with
PC_FIELDSPLIT_SCHUR_PRE_SELFP in petsc-3.5.1 using fortran.

The first problem is that PC_FIELDSPLIT_SCHUR_PRE_SELFP seems to
be missing in fortran, I get the compile error:

This name does not have a type, and must have an explicit type.   
[PC_FIELDSPLIT_SCHUR_PRE_SELFP]

while compilation works fine with _A11, _SELF and _USER.

Mark Adams has just fixed this.

The second problem is that the call doesn't seem to have any
effect. For example, I have

CALL PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,aa,ierr)
CALL PCFieldSplitSetSchurFactType(pc,PC_FIELDSPLIT_SCHUR_FACT_LOWER,ierr)

This compiles and runs, but ksp_view tells me

PC Object:(sys_) 3 MPI processes
  type: fieldsplit
FieldSplit with Schur preconditioner, factorization LOWER
Preconditioner for the Schur complement formed from A11

So changing the factorization from the default FULL to LOWER did
work, but changing the preconditioner from A11 to USER didn't.

I've also tried to run directly from the command line using

-sys_pc_fieldsplit_schur_precondition user -sys_ksp_view

This works in the sense that I don't get the WARNING! There are
options you set that were not used! message, but still ksp_view
reports A11 instead of user provided matrix.

Can you send a small test code, since I use this everyday here and it works.

  Thanks,

 Matt

Chris


dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl




--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener




Re: [petsc-users] MatNestGetISs in fortran

2014-08-19 Thread Klaij, Christiaan
MatNestGetISs works as expected in fortran with petsc-3.5.1. Thanks!

Chris
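
For the archives, the call that now works from Fortran looks roughly like this (a sketch only, assuming a 2x2 MatNest named A and petsc-3.5.1):

IS :: isrows(2), iscols(2)
call MatNestGetISs(A,isrows,iscols,ierr)
! isrows(1), isrows(2) are the row index sets of the blocks,
! e.g. for use with PCFieldSplitSetIS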

From: Satish Balay ba...@mcs.anl.gov
Sent: Monday, July 07, 2014 6:37 PM
To: Klaij, Christiaan
Cc: petsc-users
Subject: Re: [petsc-users] MatNestGetISs in fortran

My change is in petsc-3.5. You can upgrade and see if it works for you..

Satish

On Mon, 7 Jul 2014, Klaij, Christiaan wrote:

 Satish,

 Thanks for your reply! I've been on holiday for some time, do you still want 
 me to test this or has it been fixed with the 3.5 release?

 Chris

 
 From: Satish Balay ba...@mcs.anl.gov
 Sent: Monday, June 16, 2014 5:41 PM
 To: Klaij, Christiaan
 Cc: petsc-users@mcs.anl.gov
 Subject: Re: [petsc-users] MatNestGetISs in fortran

 perhaps this routine does not need custom fortran interface.

 Does the attached src/mat/impls/nest/ftn-auto/matnestf.c work [with 
 petsc-3.4]?

 If so - I'll add this to  petsc dev [master]

 thanks,
 Satish

 On Fri, 13 Jun 2014, Klaij, Christiaan wrote:

  Perhaps this message from May 27 slipped through the email cracks as Matt 
  puts it?
 
  Chris
 
 
  dr. ir. Christiaan Klaij
  CFD Researcher
  Research  Development
  E mailto:c.kl...@marin.nl
  T +31 317 49 33 44
 
 
  MARIN
  2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
  T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
 
  
  From: Klaij, Christiaan
  Sent: Monday, June 02, 2014 9:54 AM
  To: petsc-users@mcs.anl.gov
  Subject: RE: MatNestGetISs in fortran
 
  Just a reminder. Could you please add fortran support for MatNestGetISs?
  
  From: Klaij, Christiaan
  Sent: Tuesday, May 27, 2014 3:47 PM
  To: petsc-users@mcs.anl.gov
  Subject: MatNestGetISs in fortran
 
  I'm trying to use MatNestGetISs in a fortran program but it seems to be 
  missing from the fortran include file (PETSc 3.4).
 
 




Re: [petsc-users] why a certain option cannot be used

2014-07-17 Thread Klaij, Christiaan
Just for future reference, below are the correct comments that
should be at the top of ex70.

(Jed, Matt, I sent these to the maintainer list a long time
ago ([petsc-maint #155997]), but you never put them in; having
them there would avoid a lot of confusion.)

/* Poiseuille flow problem.   */
/**/
/* Viscous, laminar flow in a 2D channel with parabolic velocity  */
/* profile and linear pressure drop, exact solution of the 2D Stokes  */
/* equations. */
/**/
/* Discretized with the cell-centered finite-volume method on a   */
/* Cartesian grid with co-located variables. Variables ordered as */
/* [u1...uN v1...vN p1...pN]^T. Matrix [A00 A01; A10, A11] solved with*/
/* PCFIELDSPLIT. Lower factorization is used to mimick the Semi-Implicit  */
/* Method for Pressure Linked Equations (SIMPLE) used as preconditioner   */
/* instead of solver. */
/**/
/* Disclaimer: does not contain the pressure-weighed interpolation*/
/* method needed to suppress spurious pressure modes in real-life */
/* problems.  */
/**/
/* usage: */
/**/
/* mpiexec -n 2 ./ex70 -nx 32 -ny 48 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type lower -fieldsplit_1_pc_type none */
/**/
/*   Runs with PCFIELDSPLIT on 32x48 grid, no PC for the Schur*/
/*   complement because A11 is zero. FGMRES is needed because */
/*   PCFIELDSPLIT is a variable preconditioner.   */
/**/
/* mpiexec -n 2 ./ex70 -nx 32 -ny 48 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type lower -user_pc */
/**/
/*   Same as above but with user defined PC for the true Schur*/
/*   complement. PC based on the SIMPLE-type approximation (inverse of*/
/*   A00 approximated by inverse of its diagonal).*/
/**/
/* mpiexec -n 2 ./ex70 -nx 32 -ny 48 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type lower -user_ksp */
/**/
/*   Replace the true Schur complement with a user defined Schur  */
/*   complement based on the SIMPLE-type approximation. Same matrix is*/
/*   used as PC.  */
/**/
/* mpiexec -n 2 ./ex70 -nx 32 -ny 48 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type lower -fieldsplit_0_ksp_type gmres -fieldsplit_0_pc_type bjacobi -fieldsplit_1_pc_type jacobi -fieldsplit_1_inner_ksp_type preonly -fieldsplit_1_inner_pc_type jacobi -fieldsplit_1_upper_ksp_type preonly -fieldsplit_1_upper_pc_type jacobi */
/**/
/*   Out-of-the-box SIMPLE-type preconditioning. The major advantage  */
/*   is that the user neither needs to provide the approximation of   */
/*   the Schur complement, nor the corresponding preconditioner.  */



dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



Re: [petsc-users] MatNestGetISs in fortran

2014-07-07 Thread Klaij, Christiaan
Satish,

Thanks for your reply! I've been on holiday for some time, do you still want me 
to test this or has it been fixed with the 3.5 release?

Chris


From: Satish Balay ba...@mcs.anl.gov
Sent: Monday, June 16, 2014 5:41 PM
To: Klaij, Christiaan
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] MatNestGetISs in fortran

perhaps this routine does not need custom fortran interface.

Does the attached src/mat/impls/nest/ftn-auto/matnestf.c work [with petsc-3.4]?

If so - I'll add this to  petsc dev [master]

thanks,
Satish

On Fri, 13 Jun 2014, Klaij, Christiaan wrote:

 Perhaps this message from May 27 slipped through the email cracks as Matt 
 puts it?

 Chris


 dr. ir. Christiaan Klaij
 CFD Researcher
 Research  Development
 E mailto:c.kl...@marin.nl
 T +31 317 49 33 44


 MARIN
 2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
 T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl

 
 From: Klaij, Christiaan
 Sent: Monday, June 02, 2014 9:54 AM
 To: petsc-users@mcs.anl.gov
 Subject: RE: MatNestGetISs in fortran

 Just a reminder. Could you please add fortran support for MatNestGetISs?
 
 From: Klaij, Christiaan
 Sent: Tuesday, May 27, 2014 3:47 PM
 To: petsc-users@mcs.anl.gov
 Subject: MatNestGetISs in fortran

 I'm trying to use MatNestGetISs in a fortran program but it seems to be 
 missing from the fortran include file (PETSc 3.4).




Re: [petsc-users] MatNestGetISs in fortran

2014-06-13 Thread Klaij, Christiaan
Perhaps this message from May 27 slipped through the email cracks as Matt 
puts it?

Chris


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl


From: Klaij, Christiaan
Sent: Monday, June 02, 2014 9:54 AM
To: petsc-users@mcs.anl.gov
Subject: RE: MatNestGetISs in fortran

Just a reminder. Could you please add fortran support for MatNestGetISs?

From: Klaij, Christiaan
Sent: Tuesday, May 27, 2014 3:47 PM
To: petsc-users@mcs.anl.gov
Subject: MatNestGetISs in fortran

I'm trying to use MatNestGetISs in a fortran program but it seems to be missing 
from the fortran include file (PETSc 3.4).



Re: [petsc-users] MatNestGetISs in fortran

2014-06-02 Thread Klaij, Christiaan
Just a reminder. Could you please add fortran support for MatNestGetISs?


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl


From: Klaij, Christiaan
Sent: Tuesday, May 27, 2014 3:47 PM
To: petsc-users@mcs.anl.gov
Subject: MatNestGetISs in fortran

I'm trying to use MatNestGetISs in a fortran program but it seems to be missing 
from the fortran include file (PETSc 3.4).



[petsc-users] MatNestGetISs in fortran

2014-05-27 Thread Klaij, Christiaan
I'm trying to use MatNestGetISs in a fortran program but it seems to be missing 
from the fortran include file (PETSc 3.4).



dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



[petsc-users] petsc 3.4, mat_view and prefix problem

2014-05-14 Thread Klaij, Christiaan
I'm having problems using mat_view in petsc 3.4.3 in combination
with a prefix. For example in ../snes/examples/tutorials/ex70:

mpiexec -n 2 ./ex70 -nx 16 -ny 24 -ksp_type fgmres -pc_type fieldsplit 
-pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type lower -user_ksp 
-a00_mat_view

does not print the matrix a00 to screen. This used to work in 3.3
versions before the single consistent -xxx_view scheme.

Similarly, if I add this at line 105 of
../ksp/ksp/examples/tutorials/ex1f.F:

  call MatSetOptionsPrefix(A,'a_',ierr)

then running with -mat_view still prints the matrix to screen but
running with -a_mat_view doesn't. I expected the opposite.

The problem only occurs with mat, not with ksp. For example, if I
add this at line 184 of ../ksp/ksp/examples/tutorials/ex1f.F:

  call KSPSetOptionsPrefix(ksp,'a_',ierr)

then running with -a_ksp_monitor does print the residuals to
screen and -ksp_monitor doesn't, as expected.
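
In the meantime the matrix can be dumped explicitly, independent of how the prefixed option is handled (a sketch; A is the matrix that received the a_ prefix above):

  call MatView(A,PETSC_VIEWER_STDOUT_WORLD,ierr)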


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



[petsc-users] webpage layout

2014-03-06 Thread Klaij, Christiaan
Something's wrong with 
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetOption.html
In Firefox 24.3.0, the option description is continued at the bottom of the 
page, after the examples.


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:c.kl...@marin.nl
T +31 317 49 33 44


MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



[petsc-users] snes ex62.c header file

2012-09-27 Thread Klaij, Christiaan
If I understand correctly, the first step to running SNES ex62.c is to create 
the header file, however:

$ $PETSC_DIR/bin/pythonscripts/PetscGenerateFEMQuadrature.py dim order dim 1 
laplacian dim order 1 1 gradient $PETSC_DIR/src/snes/examples/tutorials/ex62.h
Traceback (most recent call last):
  File "/home/CKlaij/ReFRESCO/Dev/trunk/Libs/build/petsc-3.3-p3/bin/pythonscripts/PetscGenerateFEMQuadrature.py", line 12, in <module>
from FIAT.reference_element import default_simplex
ImportError: No module named FIAT.reference_element




dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:C.Klaij at marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



[petsc-users] difference between left and right pc

2012-09-26 Thread Klaij, Christiaan
  I can't do that, it's a matshell.


 You can use MatComputeExplicitOperator or MatFDColoring to get the entries.
 Also, make sure the MATSHELL is really, truly a linear operator. You'll get
 very confusing results if you accidentally leave some nonlinearity inside
 that function.

Thanks for pointing this out, very handy indeed for debugging shells.
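
A minimal sketch of that debugging pattern, assuming the auto-generated Fortran binding of MatComputeExplicitOperator is available in this PETSc version (Ashell stands for the user-defined MATSHELL):

Mat :: Aexpl
call MatComputeExplicitOperator(Ashell,Aexpl,ierr)
call MatView(Aexpl,PETSC_VIEWER_STDOUT_WORLD,ierr)
call MatDestroy(Aexpl,ierr)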


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:C.Klaij at marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



[petsc-users] difference between left and right pc

2012-09-24 Thread Klaij, Christiaan
   What happens if you use -pc_type lu  ?

Barry,

I can't do that, it's a matshell.

Chris



 On Sep 21, 2012, at 3:29 AM, Klaij, Christiaan C.Klaij at marin.nl wrote:

 
  When I use zero initial guess, GMRES with left PC gives a huge
  jump in true resisdual between iteration 0 and 1 and GMRES with
  right PC is stuck, the solution remains zero, as mentioned before.
 
  When I use the Knoll trick, both issues are gone (!) and I do get
  similar results for left and right preconditioning, both for the
  iteration count and for the physics of the solution.
 
  I didn't expect such a difference, did you? If so, why? Somehow
  it must be related to the rhs being quite small.


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:C.Klaij at marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



[petsc-users] difference between left and right pc

2012-09-21 Thread Klaij, Christiaan
I'm solving a system with GMRES using the same preconditioner either on
the left or on the right.
For left preconditioning I get two orders of reduction for the
preconditioned residual in 20 its:
   
  
   Notice here that your preconditioner is far from one. It manages to
  blow
   up the true residual by
   5 orders of magnitude, from which it never recovers. The right
   preconditioning just avoids being
   so screwed up.
  
  Matt
 
  Yes, I noticed that. It does recover 2 orders in 20 its, and it
  can recover 5 orders and beyond in a few hundred its. What I
  don't understand is how the same preconditioner applied to the
  right just avoids being so screwed up.
 

 Suppose that your preconditioner has a huge null space, and b fits
 into it. Then right preconditioning would do nothing at all. Some tiny
 bit would creep through since Ab is not entirely in it, but there would
 be a small preconditioned residual with large true residual.

 Matt

Thanks Matt. But don't you mean *left* preconditioning would do
nothing at all? If P^{-1} has a null space and b fits into it,
then P^{-1} A x = P^{-1} b = 0, which would give A x = 0 and hence
x = 0. That's not what I'm seeing.

Chris


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:C.Klaij at marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl



[petsc-users] difference between left and right pc

2012-09-21 Thread Klaij, Christiaan

When I use zero initial guess, GMRES with left PC gives a huge
jump in true resisdual between iteration 0 and 1 and GMRES with
right PC is stuck, the solution remains zero, as mentioned before.

When I use the Knoll trick, both issues are gone (!) and I do get
similar results for left and right preconditioning, both for the
iteration count and for the physics of the solution.

I didn't expect such a difference, did you? If so, why? Somehow
it must be related to the rhs being quite small.
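
For reference, the Knoll trick can be switched on either in code or from the options database (a sketch; ksp is the outer solver):

call KSPSetInitialGuessKnoll(ksp,PETSC_TRUE,ierr)
! equivalent command-line option: -ksp_knoll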


GMRES, left PC, initial guess zero:
  0 KSP preconditioned resid norm 2.980694554053e+01 true resid norm 
7.058057578378e-05 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 1.121717063399e+01 true resid norm 
2.239445995669e+00 ||r(i)||/||b|| 3.172892783603e+04
  2 KSP preconditioned resid norm 8.419094257245e+00 true resid norm 
4.482707776056e+00 ||r(i)||/||b|| 6.351191848857e+04
  3 KSP preconditioned resid norm 6.113655853636e+00 true resid norm 
1.759217056899e+00 ||r(i)||/||b|| 2.492494623858e+04
  4 KSP preconditioned resid norm 4.949889403847e+00 true resid norm 
3.572848898052e-01 ||r(i)||/||b|| 5.062085224406e+03
  5 KSP preconditioned resid norm 4.187220822242e+00 true resid norm 
7.071876172117e-01 ||r(i)||/||b|| 1.001957846558e+04
  6 KSP preconditioned resid norm 3.598699773848e+00 true resid norm 
6.751395101318e-01 ||r(i)||/||b|| 9.565514344910e+03
  7 KSP preconditioned resid norm 3.024026574700e+00 true resid norm 
5.529170377011e-01 ||r(i)||/||b|| 7.833841415447e+03
  8 KSP preconditioned resid norm 2.609636515722e+00 true resid norm 
5.689992696649e-01 ||r(i)||/||b|| 8.061697759565e+03
  9 KSP preconditioned resid norm 2.254221020819e+00 true resid norm 
4.949259965429e-01 ||r(i)||/||b|| 7.012212510975e+03
 10 KSP preconditioned resid norm 1.873529244708e+00 true resid norm 
6.109183824231e-01 ||r(i)||/||b|| 8.655616302912e+03
 11 KSP preconditioned resid norm 1.505474576580e+00 true resid norm 
4.363808762555e-01 ||r(i)||/||b|| 6.182733300340e+03
 12 KSP preconditioned resid norm 1.273391808351e+00 true resid norm 
5.799473619663e-01 ||r(i)||/||b|| 8.216812565299e+03
 13 KSP preconditioned resid norm 1.092596045026e+00 true resid norm 
5.341297417537e-01 ||r(i)||/||b|| 7.567659172829e+03
 14 KSP preconditioned resid norm 9.145639963916e-01 true resid norm 
4.424631524670e-01 ||r(i)||/||b|| 6.268908230820e+03
 15 KSP preconditioned resid norm 7.619506249149e-01 true resid norm 
4.154893466277e-01 ||r(i)||/||b|| 5.886737845558e+03
 16 KSP preconditioned resid norm 6.305034569873e-01 true resid norm 
4.166530590059e-01 ||r(i)||/||b|| 5.903225559994e+03
 17 KSP preconditioned resid norm 5.020718919136e-01 true resid norm 
3.542538361268e-01 ||r(i)||/||b|| 5.019140637390e+03
 18 KSP preconditioned resid norm 4.099172843566e-01 true resid norm 
2.942812083953e-01 ||r(i)||/||b|| 4.169436209997e+03
 19 KSP preconditioned resid norm 3.456791256934e-01 true resid norm 
2.474858759247e-01 ||r(i)||/||b|| 3.506430390747e+03
 20 KSP preconditioned resid norm 2.730195605094e-01 true resid norm 
2.641558094323e-01 ||r(i)||/||b|| 3.742613410260e+03


GMRES, right PC, initial guess zero:
  0 KSP unpreconditioned resid norm 7.058057578378e-05 true resid norm 
7.058057578378e-05 ||r(i)||/||b|| 1.e+00
  1 KSP unpreconditioned resid norm 7.054747142321e-05 true resid norm 
7.054747142321e-05 ||r(i)||/||b|| 9.995309706643e-01
  2 KSP unpreconditioned resid norm 7.020651831374e-05 true resid norm 
7.020651831373e-05 ||r(i)||/||b|| 9.947002774360e-01
  3 KSP unpreconditioned resid norm 7.006225380599e-05 true resid norm 
7.006225374905e-05 ||r(i)||/||b|| 9.926563076458e-01
  4 KSP unpreconditioned resid norm 7.004188290578e-05 true resid norm 
7.004188287852e-05 ||r(i)||/||b|| 9.923676889953e-01
  5 KSP unpreconditioned resid norm 7.004130975499e-05 true resid norm 
7.004130973557e-05 ||r(i)||/||b|| 9.923595685891e-01
  6 KSP unpreconditioned resid norm 7.002915081650e-05 true resid norm 
7.002915072237e-05 ||r(i)||/||b|| 9.921872972090e-01
  7 KSP unpreconditioned resid norm 6.992906439247e-05 true resid norm 
6.992906454226e-05 ||r(i)||/||b|| 9.907692557861e-01
  8 KSP unpreconditioned resid norm 6.992498998319e-05 true resid norm 
6.992499016218e-05 ||r(i)||/||b|| 9.907115291379e-01
  9 KSP unpreconditioned resid norm 6.992334551935e-05 true resid norm 
6.992334572069e-05 ||r(i)||/||b|| 9.906882303552e-01
 10 KSP unpreconditioned resid norm 6.992269976389e-05 true resid norm 
6.992269995062e-05 ||r(i)||/||b|| 9.906790809531e-01
 11 KSP unpreconditioned resid norm 6.992074987133e-05 true resid norm 
6.992075003662e-05 ||r(i)||/||b|| 9.906514541736e-01
 12 KSP unpreconditioned resid norm 6.991044260131e-05 true resid norm 
6.991044279866e-05 ||r(i)||/||b|| 9.905054191230e-01
 13 KSP unpreconditioned resid norm 6.990672948921e-05 true resid norm 
6.990672970817e-05 ||r(i)||/||b|| 9.904528112992e-01
 14 KSP unpreconditioned resid norm 6.990672944080e-05 true resid norm 
6.990672965979e-05 

[petsc-users] difference between left and right pc

2012-09-20 Thread Klaij, Christiaan
  I'm solving a system with GMRES using the same preconditioner either on
  the left or on the right.
  For left preconditioning I get two orders of reduction for the
  preconditioned residual in 20 its:
 

 Notice here that your preconditioner is far from one. It manages to blow
 up the true residual by
 5 orders of magnitude, from which it never recovers. The right
 preconditioning just avoids being
 so screwed up.

Matt

Yes, I noticed that. It does recover 2 orders in 20 its, and it
can recover 5 orders and beyond in a few hundred its. What I
don't understand is how the same preconditioner applied to the
right just avoids being so screwed up.
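
One piece of background that may explain part of this (a general property of GMRES, not specific to this code): with left preconditioning GMRES minimizes the preconditioned residual ||P^{-1}(b - A x)||_2 over the Krylov space, whereas with right preconditioning it minimizes the true residual ||b - A P^{-1} u||_2 with x = P^{-1} u. The two runs therefore drive down different quantities, so the preconditioned and true residual columns are not directly comparable between them.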

Chris


dr. ir. Christiaan Klaij
CFD Researcher
Research  Development
E mailto:C.Klaij at marin.nl
T +31 317 49 33 44

MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl


