Re: [petsc-users] Moving from KSPSetNullSpace to MatSetNullSpace

2016-10-25 Thread Matthew Knepley
On Tue, Oct 25, 2016 at 9:20 PM, Barry Smith  wrote:

>
>   Olivier,
>
> Ok, so I've run the code in the debugger, but I don't think the
> problem is with the null space. The code is correctly removing the null
> space on all the levels of multigrid.
>
> I think the error comes from changes in the behavior of GAMG. GAMG is
> evolving relatively rapidly, with different defaults and even different code
> in each release.
>
> To check this I added the option -poisson_mg_levels_pc_sor_lits 2 and
> it stopped complaining about KSP_DIVERGED_INDEFINITE_PC. I've seen this
> before where the smoother is "too weak" and the net result is that the
> action of the preconditioner is indefinite. Mark Adams probably has better
> suggestions on how to make the preconditioner behave. Note you could also
> use a KSP of richardson or gmres instead of cg since they don't care about
> this indefinite business.


I think old GAMG squared the graph by default. You can see in the 3.7
output that it does not.
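
(For readers who want to experiment with that difference: graph squaring in GAMG is controlled
by an option; with the solver prefix used in this code it would be spelled roughly as below.
The exact name and its default vary between PETSc versions, so treat this as a sketch.)

  -poisson_pc_gamg_square_graph 1     (number of levels on which to square the graph; 0 disables it)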

   Matt


>
>Barry
>
>
>
> > On Oct 25, 2016, at 5:39 PM, Olivier Mesnard 
> wrote:
> >
> > On 25 October 2016 at 17:51, Barry Smith  wrote:
> >
> >   Olivier,
> >
> > In theory you do not need to change anything else. Are you using a
> different matrix object for the velocity_ksp object than the poisson_ksp
> object?
> >
> > ​The matrix is different for the velocity_ksp and the poisson_ksp​.
> >
> > The code change in PETSc is very little but we have a report from
> another CFD user who also had problems with the change so there may be some
> subtle bug that we can't figure out causing things to not behave properly.
> >
> >First run the 3.7.4 code with -poisson_ksp_view and verify that when
> it prints the matrix information it prints something like "has attached null
> space"; if it does not print that, it means that somehow the null space is not
> properly getting attached to the matrix.
> >
> > ​When running with 3.7.4 and -poisson_ksp_view, the output shows that
> the nullspace is not attached to the KSP (as it was with 3.5.4)​; however
> the print statement is now under the Mat info (which is expected when
> moving from KSPSetNullSpace to MatSetNullSpace?).
> >
> > Though older versions had MatSetNullSpace() they didn't necessarily
> associate it with the KSP so it was not expected to work as a replacement
> for KSPSetNullSpace() with older versions.
> >
> > Because our other user had great difficulty trying to debug the
> issue feel free to send us at petsc-ma...@mcs.anl.gov your code with
> instructions on building and running and we can try to track down the
> problem. Better than hours and hours spent with fruitless email. We will,
> of course, not distribute the code and will delete it when we are finished
> with it.
> >
> > ​The code is open-source and hosted on GitHub (https://github.com/
> barbagroup/PetIBM)​.
> > I just pushed the branches `feature-compatible-petsc-3.7` and
> `revert-compatible-petsc-3.5` that I used to observe this problem.
> >
> > PETSc (both 3.5.4 and 3.7.4) was configured as follows:
> > export PETSC_ARCH="linux-gnu-dbg"
> > ./configure --PETSC_ARCH=$PETSC_ARCH \
> >   --with-cc=gcc \
> >   --with-cxx=g++ \
> >   --with-fc=gfortran \
> >   --COPTFLAGS="-O0" \
> >   --CXXOPTFLAGS="-O0" \
> >   --FOPTFLAGS="-O0" \
> >   --with-debugging=1 \
> >   --download-fblaslapack \
> >   --download-mpich \
> >   --download-hypre \
> >   --download-yaml \
> >   --with-x=1
> >
> > Our code was built using the following commands:​
> > mkdir petibm-build
> > cd petibm-build
> > ​export PETSC_DIR=
> > export PETSC_ARCH="linux-gnu-dbg"
> > export PETIBM_DIR=
> > $PETIBM_DIR/configure --prefix=$PWD \
> >   CXX=$PETSC_DIR/$PETSC_ARCH/bin/mpicxx \
> >   CXXFLAGS="-g -O0 -std=c++11"​
> > make all
> > make install
> >
> > ​Then
> > cd examples
> > make examples​
> >
> > ​The example of the lid-driven cavity I was talking about can be found
> in the folder `examples/2d/convergence/lidDrivenCavity20/20/`​
> >
> > To run it:
> > mpiexec -n N /bin/petibm2d -directory
> 
> >
> > Let me know if you need more info. Thank you.
> >
> >Barry
> >
> >
> >
> >
> >
> >
> >
> >
> > > On Oct 25, 2016, at 4:38 PM, Olivier Mesnard <
> olivier.mesna...@gmail.com> wrote:
> > >
> > > Hi all,
> > >
> > > We develop a CFD code using the PETSc library that solves the
> Navier-Stokes equations using the fractional-step method from Perot (1993).
> > > At each time-step, we solve two systems: one for the velocity field,
> the other, a Poisson system, for the pressure field.
> > > One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a
> 20x20 grid using 1 or 2 procs.
> > > For the Poisson system, we usually use CG preconditioned with GAMG.
> > >
> > > So far, we have been using PETSc-3.5.4, and we would like to update
> the code with the latest release: 3.7.4.
> > >
> > > As 

Re: [petsc-users] Moving from KSPSetNullSpace to MatSetNullSpace

2016-10-25 Thread Barry Smith

  Olivier,

Ok, so I've run the code in the debugger, but I don't think the problem 
is with the null space. The code is correctly removing the null space on all 
the levels of multigrid.

I think the error comes from changes in the behavior of GAMG. GAMG is 
evolving relatively rapidly, with different defaults and even different code in 
each release.

To check this, I added the option -poisson_mg_levels_pc_sor_lits 2 and it 
stopped complaining about KSP_DIVERGED_INDEFINITE_PC. I've seen this before 
where the smoother is "too weak" and the net result is that the action of the 
preconditioner is indefinite. Mark Adams probably has better suggestions on how 
to make the preconditioner behave. Note you could also use a KSP of richardson 
or gmres instead of cg, since they don't care about this indefinite business.
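
Spelled out as options for this case, the two workarounds above would look roughly as
follows (a sketch based on the option names given in the thread, not a tested setup):

  -poisson_mg_levels_pc_sor_lits 2    (stronger smoother: two SOR iterations per level)
  -poisson_ksp_type gmres             (or richardson: Krylov methods that tolerate an indefinite PC)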

   Barry



> On Oct 25, 2016, at 5:39 PM, Olivier Mesnard  
> wrote:
> 
> On 25 October 2016 at 17:51, Barry Smith  wrote:
> 
>   Olivier,
> 
> In theory you do not need to change anything else. Are you using a 
> different matrix object for the velocity_ksp object than the poisson_ksp 
> object?
> 
> ​The matrix is different for the velocity_ksp and the poisson_ksp​.
>  
> The code change in PETSc is very little but we have a report from another 
> CFD user who also had problems with the change so there may be some subtle 
> bug that we can't figure out causing things to not behave properly.
> 
>First run the 3.7.4 code with -poisson_ksp_view and verify that when it 
> prints the matrix information it prints something like "has attached null 
> space"; if it does not print that, it means that somehow the null space is not 
> properly getting attached to the matrix.
> 
> ​When running with 3.7.4 and -poisson_ksp_view, the output shows that the 
> nullspace is not attached to the KSP (as it was with 3.5.4)​; however the 
> print statement is now under the Mat info (which is expected when moving from 
> KSPSetNullSpace to MatSetNullSpace?).
> 
> Though older versions had MatSetNullSpace() they didn't necessarily 
> associate it with the KSP so it was not expected to work as a replacement for 
> KSPSetNullSpace() with older versions.
> 
> Because our other user had great difficulty trying to debug the issue 
> feel free to send us at petsc-ma...@mcs.anl.gov your code with instructions 
> on building and running and we can try to track down the problem. Better than 
> hours and hours spent with fruitless email. We will, of course, not 
> distribute the code and will delete it when we are finished with it.
> 
> ​The code is open-source and hosted on GitHub 
> (https://github.com/barbagroup/PetIBM)​.
> I just pushed the branches `feature-compatible-petsc-3.7` and 
> `revert-compatible-petsc-3.5` that I used to observe this problem.
> 
> PETSc (both 3.5.4 and 3.7.4) was configured as follows:
> export PETSC_ARCH="linux-gnu-dbg"
> ./configure --PETSC_ARCH=$PETSC_ARCH \
>   --with-cc=gcc \
>   --with-cxx=g++ \
>   --with-fc=gfortran \
>   --COPTFLAGS="-O0" \
>   --CXXOPTFLAGS="-O0" \
>   --FOPTFLAGS="-O0" \
>   --with-debugging=1 \
>   --download-fblaslapack \
>   --download-mpich \
>   --download-hypre \
>   --download-yaml \
>   --with-x=1
> 
> Our code was built using the following commands:​
> mkdir petibm-build
> cd petibm-build
> ​export PETSC_DIR=
> export PETSC_ARCH="linux-gnu-dbg"
> export PETIBM_DIR=
> $PETIBM_DIR/configure --prefix=$PWD \
>   CXX=$PETSC_DIR/$PETSC_ARCH/bin/mpicxx \
>   CXXFLAGS="-g -O0 -std=c++11"​
> make all
> make install
> 
> ​Then
> cd examples
> make examples​
> 
> ​The example of the lid-driven cavity I was talking about can be found in the 
> folder `examples/2d/convergence/lidDrivenCavity20/20/`​
> 
> To run it:
> mpiexec -n N /bin/petibm2d -directory 
> 
> Let me know if you need more info. Thank you.
> 
>Barry
> 
> 
> 
> 
> 
> 
> 
> 
> > On Oct 25, 2016, at 4:38 PM, Olivier Mesnard  
> > wrote:
> >
> > Hi all,
> >
> > We develop a CFD code using the PETSc library that solves the Navier-Stokes 
> > equations using the fractional-step method from Perot (1993).
> > At each time-step, we solve two systems: one for the velocity field, the 
> > other, a Poisson system, for the pressure field.
> > One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a 20x20 
> > grid using 1 or 2 procs.
> > For the Poisson system, we usually use CG preconditioned with GAMG.
> >
> > So far, we have been using PETSc-3.5.4, and we would like to update the 
> > code with the latest release: 3.7.4.
> >
> > As suggested in the changelog of 3.6, we replaced the routine 
> > `KSPSetNullSpace()` with `MatSetNullSpace()`.
> >
> > Here is the list of options we use to configure the two solvers:
> > * Velocity solver: prefix `-velocity_`
> >   -velocity_ksp_type bcgs
> >   -velocity_ksp_rtol 1.0E-08
> >   -velocity_ksp_atol 0.0
> >   

Re: [petsc-users] Moving from KSPSetNullSpace to MatSetNullSpace

2016-10-25 Thread Barry Smith

> On Oct 25, 2016, at 8:28 PM, Matthew Knepley  wrote:
> 
> On Tue, Oct 25, 2016 at 7:57 PM, Olivier Mesnard  
> wrote:
> On 25 October 2016 at 19:59, Matthew Knepley  wrote:
> On Tue, Oct 25, 2016 at 6:22 PM, Barry Smith  wrote:
> 
> > On Oct 25, 2016, at 5:39 PM, Olivier Mesnard  
> > wrote:
> >
> > On 25 October 2016 at 17:51, Barry Smith  wrote:
> >
> >   Olivier,
> >
> > In theory you do not need to change anything else. Are you using a 
> > different matrix object for the velocity_ksp object than the poisson_ksp 
> > object?
> >
> > ​The matrix is different for the velocity_ksp and the poisson_ksp​.
> >
> > The code change in PETSc is very little but we have a report from 
> > another CFD user who also had problems with the change so there may be some 
> > subtle bug that we can't figure out causing things to not behave properly.
> >
> >First run the 3.7.4 code with -poisson_ksp_view and verify that when it 
> > prints the matrix information it prints something like "has attached null 
> > space"; if it does not print that, it means that somehow the null space is not 
> > properly getting attached to the matrix.
> >
> > ​When running with 3.7.4 and -poisson_ksp_view, the output shows that the 
> > nullspace is not attached to the KSP (as it was with 3.5.4)​; however the 
> > print statement is now under the Mat info (which is expected when moving 
> > from KSPSetNullSpace to MatSetNullSpace?).
> 
>Good, this is how it should be.
> >
> > Though older versions had MatSetNullSpace() they didn't necessarily 
> > associate it with the KSP so it was not expected to work as a replacement 
> > for KSPSetNullSpace() with older versions.
> >
> > Because our other user had great difficulty trying to debug the issue 
> > feel free to send us at petsc-ma...@mcs.anl.gov your code with instructions 
> > on building and running and we can try to track down the problem. Better 
> > than hours and hours spent with fruitless email. We will, of course, not 
> > distribute the code and will delete it when we are finished with it.
> >
> > ​The code is open-source and hosted on GitHub 
> > (https://github.com/barbagroup/PetIBM)​.
> > I just pushed the branches `feature-compatible-petsc-3.7` and 
> > `revert-compatible-petsc-3.5` that I used to observe this problem.
> >
> Thanks, I'll get back to you if I discover anything
> 
> Obviously GAMG is behaving quite differently (1 vs 2 levels and a much 
> sparser coarse problem in 3.7).
> 
> Could you try one thing for me before we start running it? Run with
> 
>   -poisson_mg_coarse_sub_pc_type svd
> 
> and see what happens on 2 procs for 3.7?
> 
> ​Hi Matt,
> 
> With -poisson_mg_coarse_sub_pc_type svd,
> it ran normally on 1 proc but not on 2 procs; the end of the output says:
> 
> "** On entry to DGESVD parameter number  6 had an illegal value"
> 
> Something is wrong with your 3.7 installation. That parameter is a simple 
> size, so I suspect
> memory corruption. Run with valgrind.

  Matt,

   This is our bug; I am fixing it now. It turns out DGESVD will error out if you 
give it a vector size of zero (and GAMG squeezes the coarse matrix onto one 
process, leaving the others empty), so PETSc needs to just return when the matrix 
has zero size instead of calling DGESVD.
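
Schematically, the guard described above might look like the sketch below (illustrative
only: the function and variable names are invented, not the actual PETSc source):

  #include <petscsys.h>

  /* Sketch of the fix: return early instead of handing an empty local matrix
     to LAPACK's DGESVD, which rejects a zero size / zero leading dimension.  */
  static PetscErrorCode DenseSVDSetUp_Sketch(PetscInt m, PetscInt n)
  {
    PetscFunctionBegin;
    if (m == 0 || n == 0) PetscFunctionReturn(0);  /* nothing to factor */
    /* ... otherwise set up work arrays and call LAPACKgesvd_() as before ... */
    PetscFunctionReturn(0);
  }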

  Barry
> 
>   Thanks,
> 
>  Matt
>  
> I attached the log file.
> ​ 
>   Thanks,
> 
>  Matt
>  
> 
> > PETSc (both 3.5.4 and 3.7.4) was configured as follows:
> > export PETSC_ARCH="linux-gnu-dbg"
> > ./configure --PETSC_ARCH=$PETSC_ARCH \
> >   --with-cc=gcc \
> >   --with-cxx=g++ \
> >   --with-fc=gfortran \
> >   --COPTFLAGS="-O0" \
> >   --CXXOPTFLAGS="-O0" \
> >   --FOPTFLAGS="-O0" \
> >   --with-debugging=1 \
> >   --download-fblaslapack \
> >   --download-mpich \
> >   --download-hypre \
> >   --download-yaml \
> >   --with-x=1
> >
> > Our code was built using the following commands:​
> > mkdir petibm-build
> > cd petibm-build
> > ​export PETSC_DIR=
> > export PETSC_ARCH="linux-gnu-dbg"
> > export PETIBM_DIR=
> > $PETIBM_DIR/configure --prefix=$PWD \
> >   CXX=$PETSC_DIR/$PETSC_ARCH/bin/mpicxx \
> >   CXXFLAGS="-g -O0 -std=c++11"​
> > make all
> > make install
> >
> > ​Then
> > cd examples
> > make examples​
> >
> > ​The example of the lid-driven cavity I was talking about can be found in 
> > the folder `examples/2d/convergence/lidDrivenCavity20/20/`​
> >
> > To run it:
> > mpiexec -n N /bin/petibm2d -directory 
> > 
> >
> > Let me know if you need more info. Thank you.
> >
> >Barry
> >
> >
> >
> >
> >
> >
> >
> >
> > > On Oct 25, 2016, at 4:38 PM, Olivier Mesnard  
> > > wrote:
> > >
> > > Hi all,
> > >
> > > We develop a CFD code using the PETSc library that solves the 
> > > Navier-Stokes equations using the fractional-step method from Perot 
> > > 

Re: [petsc-users] Moving from KSPSetNullSpace to MatSetNullSpace

2016-10-25 Thread Matthew Knepley
On Tue, Oct 25, 2016 at 7:57 PM, Olivier Mesnard  wrote:

> On 25 October 2016 at 19:59, Matthew Knepley  wrote:
>
>> On Tue, Oct 25, 2016 at 6:22 PM, Barry Smith  wrote:
>>
>>>
>>> > On Oct 25, 2016, at 5:39 PM, Olivier Mesnard <
>>> olivier.mesna...@gmail.com> wrote:
>>> >
>>> > On 25 October 2016 at 17:51, Barry Smith  wrote:
>>> >
>>> >   Olivier,
>>> >
>>> > In theory you do not need to change anything else. Are you using a
>>> different matrix object for the velocity_ksp object than the poisson_ksp
>>> object?
>>> >
>>> > ​The matrix is different for the velocity_ksp and the poisson_ksp​.
>>> >
>>> > The code change in PETSc is very little but we have a report from
>>> another CFD user who also had problems with the change so there may be some
>>> subtle bug that we can't figure out causing things to not behave properly.
>>> >
>>> >First run the 3.7.4 code with -poisson_ksp_view and verify that
>>> when it prints the matrix information it prints something like "has attached
>>> null space"; if it does not print that, it means that somehow the null space
>>> is not properly getting attached to the matrix.
>>> >
>>> > ​When running with 3.7.4 and -poisson_ksp_view, the output shows that
>>> the nullspace is not attached to the KSP (as it was with 3.5.4)​; however
>>> the print statement is now under the Mat info (which is expected when
>>> moving from KSPSetNullSpace to MatSetNullSpace?).
>>>
>>>Good, this is how it should be.
>>> >
>>> > Though older versions had MatSetNullSpace() they didn't
>>> necessarily associate it with the KSP so it was not expected to work as a
>>> replacement for KSPSetNullSpace() with older versions.
>>> >
>>> > Because our other user had great difficulty trying to debug the
>>> issue feel free to send us at petsc-ma...@mcs.anl.gov your code with
>>> instructions on building and running and we can try to track down the
>>> problem. Better than hours and hours spent with fruitless email. We will,
>>> of course, not distribute the code and will delete it when we are finished
>>> with it.
>>> >
>>> > ​The code is open-source and hosted on GitHub (
>>> https://github.com/barbagroup/PetIBM)​.
>>> > I just pushed the branches `feature-compatible-petsc-3.7` and
>>> `revert-compatible-petsc-3.5` that I used to observe this problem.
>>> >
>>> Thanks, I'll get back to you if I discover anything
>>
>>
>> Obviously GAMG is behaving quite differently (1 vs 2 levels and a much
>> sparser coarse problem in 3.7).
>>
>> Could you try one thing for me before we start running it? Run with
>>
>>   -poisson_mg_coarse_sub_pc_type svd
>>
>> and see what happens on 2 procs for 3.7?
>>
>> ​Hi Matt,
>
> With -poisson_mg_coarse_sub_pc_type svd,
> it ran normally on 1 proc but not on 2 procs; the end of the output says:
>
> "** On entry to DGESVD parameter number  6 had an illegal value"
>

Something is wrong with your 3.7 installation. That parameter is a simple
size, so I suspect
memory corruption. Run with valgrind.

  Thanks,

 Matt


> I attached the log file.
> ​
>
>
>>   Thanks,
>>
>>  Matt
>>
>>
>>>
>>> > PETSc (both 3.5.4 and 3.7.4) was configured as follows:
>>> > export PETSC_ARCH="linux-gnu-dbg"
>>> > ./configure --PETSC_ARCH=$PETSC_ARCH \
>>> >   --with-cc=gcc \
>>> >   --with-cxx=g++ \
>>> >   --with-fc=gfortran \
>>> >   --COPTFLAGS="-O0" \
>>> >   --CXXOPTFLAGS="-O0" \
>>> >   --FOPTFLAGS="-O0" \
>>> >   --with-debugging=1 \
>>> >   --download-fblaslapack \
>>> >   --download-mpich \
>>> >   --download-hypre \
>>> >   --download-yaml \
>>> >   --with-x=1
>>> >
>>> > Our code was built using the following commands:​
>>> > mkdir petibm-build
>>> > cd petibm-build
>>> > ​export PETSC_DIR=
>>> > export PETSC_ARCH="linux-gnu-dbg"
>>> > export PETIBM_DIR=
>>> > $PETIBM_DIR/configure --prefix=$PWD \
>>> >   CXX=$PETSC_DIR/$PETSC_ARCH/bin/mpicxx \
>>> >   CXXFLAGS="-g -O0 -std=c++11"​
>>> > make all
>>> > make install
>>> >
>>> > ​Then
>>> > cd examples
>>> > make examples​
>>> >
>>> > ​The example of the lid-driven cavity I was talking about can be found
>>> in the folder `examples/2d/convergence/lidDrivenCavity20/20/`​
>>> >
>>> > To run it:
>>> > mpiexec -n N /bin/petibm2d -directory
>>> 
>>> >
>>> > Let me know if you need more info. Thank you.
>>> >
>>> >Barry
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > > On Oct 25, 2016, at 4:38 PM, Olivier Mesnard <
>>> olivier.mesna...@gmail.com> wrote:
>>> > >
>>> > > Hi all,
>>> > >
>>> > > We develop a CFD code using the PETSc library that solves the
>>> Navier-Stokes equations using the fractional-step method from Perot (1993).
>>> > > At each time-step, we solve two systems: one for the velocity field,
>>> the other, a Poisson system, for the pressure field.
>>> > > One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a
>>> 20x20 grid using 1 or 2 

[petsc-users] How to scatter values

2016-10-25 Thread ztdepya...@163.com

Dear professor:
I partitioned my 2D Cartesian grid with 4 rows x 4 cols of CPUs:

   12  13  14  15
    8   9  10  11
    4   5   6   7
    0   1   2   3

Now I need to scatter the values belonging to CPU 5 to every CPU along the
x direction (ranks 4, 5, 6, 7). Which function should I use?
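
One possible approach (a sketch with made-up buffer names and sizes, not the only way):
split MPI_COMM_WORLD into per-row communicators with MPI_Comm_split and broadcast from
the rank that owns the data; if the grid is managed by a DMDA, DMDAGetProcessorSubset()
can also provide such a row communicator directly.

  #include <mpi.h>

  /* Sketch: broadcast the values owned by global rank 5 to all ranks in the
     same row of the 4x4 process grid (ranks 4, 5, 6, 7 form that row).      */
  int main(int argc, char **argv)
  {
    int      world_rank;
    int      nvals = 100;     /* assumed size of the local buffer */
    double   vals[100];       /* values to share along the row    */
    MPI_Comm row_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    if (world_rank == 5) { /* ... fill vals with the data to distribute ... */ }

    /* color = row index (4 columns per row), key = column index */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank / 4, world_rank % 4, &row_comm);

    /* global rank 5 sits in column 1 of its row, so it is root 1 in row_comm */
    MPI_Bcast(vals, nvals, MPI_DOUBLE, 1, row_comm);

    MPI_Comm_free(&row_comm);
    MPI_Finalize();
    return 0;
  }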


Regards

Re: [petsc-users] Moving from KSPSetNullSpace to MatSetNullSpace

2016-10-25 Thread Olivier Mesnard
On 25 October 2016 at 19:59, Matthew Knepley  wrote:

> On Tue, Oct 25, 2016 at 6:22 PM, Barry Smith  wrote:
>
>>
>> > On Oct 25, 2016, at 5:39 PM, Olivier Mesnard <
>> olivier.mesna...@gmail.com> wrote:
>> >
>> > On 25 October 2016 at 17:51, Barry Smith  wrote:
>> >
>> >   Olivier,
>> >
>> > In theory you do not need to change anything else. Are you using a
>> different matrix object for the velocity_ksp object than the poisson_ksp
>> object?
>> >
>> > ​The matrix is different for the velocity_ksp and the poisson_ksp​.
>> >
>> > The code change in PETSc is very little but we have a report from
>> another CFD user who also had problems with the change so there may be some
>> subtle bug that we can't figure out causing things to not behave properly.
>> >
>> >First run the 3.7.4 code with -poisson_ksp_view and verify that when
>> it prints the matrix information it prints something like "has attached null
>> space"; if it does not print that, it means that somehow the null space is not
>> properly getting attached to the matrix.
>> >
>> > ​When running with 3.7.4 and -poisson_ksp_view, the output shows that
>> the nullspace is not attached to the KSP (as it was with 3.5.4)​; however
>> the print statement is now under the Mat info (which is expected when
>> moving from KSPSetNullSpace to MatSetNullSpace?).
>>
>>Good, this is how it should be.
>> >
>> > Though older versions had MatSetNullSpace() they didn't necessarily
>> associate it with the KSP so it was not expected to work as a replacement
>> for KSPSetNullSpace() with older versions.
>> >
>> > Because our other user had great difficulty trying to debug the
>> issue feel free to send us at petsc-ma...@mcs.anl.gov your code with
>> instructions on building and running and we can try to track down the
>> problem. Better than hours and hours spent with fruitless email. We will,
>> of course, not distribute the code and will delete it when we are finished
>> with it.
>> >
>> > ​The code is open-source and hosted on GitHub (
>> https://github.com/barbagroup/PetIBM)​.
>> > I just pushed the branches `feature-compatible-petsc-3.7` and
>> `revert-compatible-petsc-3.5` that I used to observe this problem.
>> >
>> Thanks, I'll get back to you if I discover anything
>
>
> Obviously GAMG is behaving quite differently (1 vs 2 levels and a much
> sparser coarse problem in 3.7).
>
> Could you try one thing for me before we start running it? Run with
>
>   -poisson_mg_coarse_sub_pc_type svd
>
> and see what happens on 2 procs for 3.7?
>
> ​Hi Matt,

With -poisson_mg_coarse_sub_pc_type svd,
it ran normally on 1 proc but not on 2 procs; the end of the output says:

"** On entry to DGESVD parameter number  6 had an illegal value"

I attached the log file.
​


>   Thanks,
>
>  Matt
>
>
>>
>> > PETSc (both 3.5.4 and 3.7.4) was configured as follows:
>> > export PETSC_ARCH="linux-gnu-dbg"
>> > ./configure --PETSC_ARCH=$PETSC_ARCH \
>> >   --with-cc=gcc \
>> >   --with-cxx=g++ \
>> >   --with-fc=gfortran \
>> >   --COPTFLAGS="-O0" \
>> >   --CXXOPTFLAGS="-O0" \
>> >   --FOPTFLAGS="-O0" \
>> >   --with-debugging=1 \
>> >   --download-fblaslapack \
>> >   --download-mpich \
>> >   --download-hypre \
>> >   --download-yaml \
>> >   --with-x=1
>> >
>> > Our code was built using the following commands:​
>> > mkdir petibm-build
>> > cd petibm-build
>> > ​export PETSC_DIR=
>> > export PETSC_ARCH="linux-gnu-dbg"
>> > export PETIBM_DIR=
>> > $PETIBM_DIR/configure --prefix=$PWD \
>> >   CXX=$PETSC_DIR/$PETSC_ARCH/bin/mpicxx \
>> >   CXXFLAGS="-g -O0 -std=c++11"​
>> > make all
>> > make install
>> >
>> > ​Then
>> > cd examples
>> > make examples​
>> >
>> > ​The example of the lid-driven cavity I was talking about can be found
>> in the folder `examples/2d/convergence/lidDrivenCavity20/20/`​
>> >
>> > To run it:
>> > mpiexec -n N /bin/petibm2d -directory
>> 
>> >
>> > Let me know if you need more info. Thank you.
>> >
>> >Barry
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > > On Oct 25, 2016, at 4:38 PM, Olivier Mesnard <
>> olivier.mesna...@gmail.com> wrote:
>> > >
>> > > Hi all,
>> > >
>> > > We develop a CFD code using the PETSc library that solves the
>> Navier-Stokes equations using the fractional-step method from Perot (1993).
>> > > At each time-step, we solve two systems: one for the velocity field,
>> the other, a Poisson system, for the pressure field.
>> > > One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a
>> 20x20 grid using 1 or 2 procs.
>> > > For the Poisson system, we usually use CG preconditioned with GAMG.
>> > >
>> > > So far, we have been using PETSc-3.5.4, and we would like to update
>> the code with the latest release: 3.7.4.
>> > >
>> > > As suggested in the changelog of 3.6, we replaced the routine
>> `KSPSetNullSpace()` with `MatSetNullSpace()`.
>> > >
>> > > Here is the list of options we use to configure 

Re: [petsc-users] Moving from KSPSetNullSpace to MatSetNullSpace

2016-10-25 Thread Matthew Knepley
On Tue, Oct 25, 2016 at 6:22 PM, Barry Smith  wrote:

>
> > On Oct 25, 2016, at 5:39 PM, Olivier Mesnard 
> wrote:
> >
> > On 25 October 2016 at 17:51, Barry Smith  wrote:
> >
> >   Olivier,
> >
> > In theory you do not need to change anything else. Are you using a
> different matrix object for the velocity_ksp object than the poisson_ksp
> object?
> >
> > ​The matrix is different for the velocity_ksp and the poisson_ksp​.
> >
> > The code change in PETSc is very little but we have a report from
> another CFD user who also had problems with the change so there may be some
> subtle bug that we can't figure out causing things to not behave properly.
> >
> >First run the 3.7.4 code with -poisson_ksp_view and verify that when
> it prints the matrix information it prints something like "has attached null
> space"; if it does not print that, it means that somehow the null space is not
> properly getting attached to the matrix.
> >
> > ​When running with 3.7.4 and -poisson_ksp_view, the output shows that
> the nullspace is not attached to the KSP (as it was with 3.5.4)​; however
> the print statement is now under the Mat info (which is expected when
> moving from KSPSetNullSpace to MatSetNullSpace?).
>
>Good, this is how it should be.
> >
> > Though older versions had MatSetNullSpace() they didn't necessarily
> associate it with the KSP so it was not expected to work as a replacement
> for KSPSetNullSpace() with older versions.
> >
> > Because our other user had great difficulty trying to debug the
> issue feel free to send us at petsc-ma...@mcs.anl.gov your code with
> instructions on building and running and we can try to track down the
> problem. Better than hours and hours spent with fruitless email. We will,
> of course, not distribute the code and will delete it when we are finished
> with it.
> >
> > ​The code is open-source and hosted on GitHub (https://github.com/
> barbagroup/PetIBM)​.
> > I just pushed the branches `feature-compatible-petsc-3.7` and
> `revert-compatible-petsc-3.5` that I used to observe this problem.
> >
> Thanks, I'll get back to you if I discover anything


Obviously GAMG is behaving quite differently (1 vs 2 levels and a much
sparser coarse problem in 3.7).

Could you try one thing for me before we start running it? Run with

  -poisson_mg_coarse_sub_pc_type svd

and see what happens on 2 procs for 3.7?
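
For readers unfamiliar with PETSc option prefixes: the name above composes the Poisson
solver prefix (-poisson_), the coarse multigrid level (mg_coarse_), its local block
sub-solver (sub_), and that sub-solver's preconditioner type (pc_type). A hypothetical
invocation for the 2-process case (the executable and case paths are placeholders, not
taken from the thread):

  mpiexec -n 2 <petibm-install>/bin/petibm2d -directory <case-directory> \
      -poisson_mg_coarse_sub_pc_type svd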

  Thanks,

 Matt


>
> > PETSc (both 3.5.4 and 3.7.4) was configured as follows:
> > export PETSC_ARCH="linux-gnu-dbg"
> > ./configure --PETSC_ARCH=$PETSC_ARCH \
> >   --with-cc=gcc \
> >   --with-cxx=g++ \
> >   --with-fc=gfortran \
> >   --COPTFLAGS="-O0" \
> >   --CXXOPTFLAGS="-O0" \
> >   --FOPTFLAGS="-O0" \
> >   --with-debugging=1 \
> >   --download-fblaslapack \
> >   --download-mpich \
> >   --download-hypre \
> >   --download-yaml \
> >   --with-x=1
> >
> > Our code was built using the following commands:​
> > mkdir petibm-build
> > cd petibm-build
> > ​export PETSC_DIR=
> > export PETSC_ARCH="linux-gnu-dbg"
> > export PETIBM_DIR=
> > $PETIBM_DIR/configure --prefix=$PWD \
> >   CXX=$PETSC_DIR/$PETSC_ARCH/bin/mpicxx \
> >   CXXFLAGS="-g -O0 -std=c++11"​
> > make all
> > make install
> >
> > ​Then
> > cd examples
> > make examples​
> >
> > ​The example of the lid-driven cavity I was talking about can be found
> in the folder `examples/2d/convergence/lidDrivenCavity20/20/`​
> >
> > To run it:
> > mpiexec -n N /bin/petibm2d -directory
> 
> >
> > Let me know if you need more info. Thank you.
> >
> >Barry
> >
> >
> >
> >
> >
> >
> >
> >
> > > On Oct 25, 2016, at 4:38 PM, Olivier Mesnard <
> olivier.mesna...@gmail.com> wrote:
> > >
> > > Hi all,
> > >
> > > We develop a CFD code using the PETSc library that solves the
> Navier-Stokes equations using the fractional-step method from Perot (1993).
> > > At each time-step, we solve two systems: one for the velocity field,
> the other, a Poisson system, for the pressure field.
> > > One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a
> 20x20 grid using 1 or 2 procs.
> > > For the Poisson system, we usually use CG preconditioned with GAMG.
> > >
> > > So far, we have been using PETSc-3.5.4, and we would like to update
> the code with the latest release: 3.7.4.
> > >
> > > As suggested in the changelog of 3.6, we replaced the routine
> `KSPSetNullSpace()` with `MatSetNullSpace()`.
> > >
> > > Here is the list of options we use to configure the two solvers:
> > > * Velocity solver: prefix `-velocity_`
> > >   -velocity_ksp_type bcgs
> > >   -velocity_ksp_rtol 1.0E-08
> > >   -velocity_ksp_atol 0.0
> > >   -velocity_ksp_max_it 1
> > >   -velocity_pc_type jacobi
> > >   -velocity_ksp_view
> > >   -velocity_ksp_monitor_true_residual
> > >   -velocity_ksp_converged_reason
> > > * Poisson solver: prefix `-poisson_`
> > >   -poisson_ksp_type cg
> > >   -poisson_ksp_rtol 1.0E-08
> 

Re: [petsc-users] Moving from KSPSetNullSpace to MatSetNullSpace

2016-10-25 Thread Barry Smith

> On Oct 25, 2016, at 5:39 PM, Olivier Mesnard  
> wrote:
> 
> On 25 October 2016 at 17:51, Barry Smith  wrote:
> 
>   Olivier,
> 
> In theory you do not need to change anything else. Are you using a 
> different matrix object for the velocity_ksp object than the poisson_ksp 
> object?
> 
> ​The matrix is different for the velocity_ksp and the poisson_ksp​.
>  
> The code change in PETSc is very little but we have a report from another 
> CFD user who also had problems with the change so there may be some subtle 
> bug that we can't figure out causing things to not behave properly.
> 
>First run the 3.7.4 code with -poisson_ksp_view and verify that when it 
> prints the matrix information it prints something like "has attached null 
> space"; if it does not print that, it means that somehow the null space is not 
> properly getting attached to the matrix.
> 
> ​When running with 3.7.4 and -poisson_ksp_view, the output shows that the 
> nullspace is not attached to the KSP (as it was with 3.5.4)​; however the 
> print statement is now under the Mat info (which is expected when moving from 
> KSPSetNullSpace to MatSetNullSpace?).

   Good, this is how it should be.
> 
> Though older versions had MatSetNullSpace() they didn't necessarily 
> associate it with the KSP so it was not expected to work as a replacement for 
> KSPSetNullSpace() with older versions.
> 
> Because our other user had great difficulty trying to debug the issue 
> feel free to send us at petsc-ma...@mcs.anl.gov your code with instructions 
> on building and running and we can try to track down the problem. Better than 
> hours and hours spent with fruitless email. We will, of course, not 
> distribute the code and will delete it when we are finished with it.
> 
> ​The code is open-source and hosted on GitHub 
> (https://github.com/barbagroup/PetIBM)​.
> I just pushed the branches `feature-compatible-petsc-3.7` and 
> `revert-compatible-petsc-3.5` that I used to observe this problem.
> 
Thanks, I'll get back to you if I discover anything



> PETSc (both 3.5.4 and 3.7.4) was configured as follows:
> export PETSC_ARCH="linux-gnu-dbg"
> ./configure --PETSC_ARCH=$PETSC_ARCH \
>   --with-cc=gcc \
>   --with-cxx=g++ \
>   --with-fc=gfortran \
>   --COPTFLAGS="-O0" \
>   --CXXOPTFLAGS="-O0" \
>   --FOPTFLAGS="-O0" \
>   --with-debugging=1 \
>   --download-fblaslapack \
>   --download-mpich \
>   --download-hypre \
>   --download-yaml \
>   --with-x=1
> 
> Our code was built using the following commands:​
> mkdir petibm-build
> cd petibm-build
> ​export PETSC_DIR=
> export PETSC_ARCH="linux-gnu-dbg"
> export PETIBM_DIR=
> $PETIBM_DIR/configure --prefix=$PWD \
>   CXX=$PETSC_DIR/$PETSC_ARCH/bin/mpicxx \
>   CXXFLAGS="-g -O0 -std=c++11"​
> make all
> make install
> 
> ​Then
> cd examples
> make examples​
> 
> ​The example of the lid-driven cavity I was talking about can be found in the 
> folder `examples/2d/convergence/lidDrivenCavity20/20/`​
> 
> To run it:
> mpiexec -n N /bin/petibm2d -directory 
> 
> Let me know if you need more info. Thank you.
> 
>Barry
> 
> 
> 
> 
> 
> 
> 
> 
> > On Oct 25, 2016, at 4:38 PM, Olivier Mesnard  
> > wrote:
> >
> > Hi all,
> >
> > We develop a CFD code using the PETSc library that solves the Navier-Stokes 
> > equations using the fractional-step method from Perot (1993).
> > At each time-step, we solve two systems: one for the velocity field, the 
> > other, a Poisson system, for the pressure field.
> > One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a 20x20 
> > grid using 1 or 2 procs.
> > For the Poisson system, we usually use CG preconditioned with GAMG.
> >
> > So far, we have been using PETSc-3.5.4, and we would like to update the 
> > code with the latest release: 3.7.4.
> >
> > As suggested in the changelog of 3.6, we replaced the routine 
> > `KSPSetNullSpace()` with `MatSetNullSpace()`.
> >
> > Here is the list of options we use to configure the two solvers:
> > * Velocity solver: prefix `-velocity_`
> >   -velocity_ksp_type bcgs
> >   -velocity_ksp_rtol 1.0E-08
> >   -velocity_ksp_atol 0.0
> >   -velocity_ksp_max_it 1
> >   -velocity_pc_type jacobi
> >   -velocity_ksp_view
> >   -velocity_ksp_monitor_true_residual
> >   -velocity_ksp_converged_reason
> > * Poisson solver: prefix `-poisson_`
> >   -poisson_ksp_type cg
> >   -poisson_ksp_rtol 1.0E-08
> >   -poisson_ksp_atol 0.0
> >   -poisson_ksp_max_it 2
> >   -poisson_pc_type gamg
> >   -poisson_pc_gamg_type agg
> >   -poisson_pc_gamg_agg_nsmooths 1
> > >   -poisson_ksp_view
> >   -poisson_ksp_monitor_true_residual
> >   -poisson_ksp_converged_reason
> >
> > With 3.5.4, the case runs normally on 1 or 2 procs.
> > With 3.7.4, the case runs normally on 1 proc but not on 2.
> > Why? The Poisson solver diverges because of an indefinite preconditioner 
> > (only with 2 procs).
> >
> > We 

Re: [petsc-users] Moving from KSPSetNullSpace to MatSetNullSpace

2016-10-25 Thread Olivier Mesnard
On 25 October 2016 at 17:51, Barry Smith  wrote:

>
>   Olivier,
>
> In theory you do not need to change anything else. Are you using a
> different matrix object for the velocity_ksp object than the poisson_ksp
> object?
>
> ​The matrix is different for the velocity_ksp and the poisson_ksp​.


> The code change in PETSc is very little but we have a report from
> another CFD user who also had problems with the change so there may be some
> subtle bug that we can't figure out causing things to not behave properly.
>
>First run the 3.7.4 code with -poisson_ksp_view and verify that when it
> prints the matrix information it prints something like "has attached null
> space"; if it does not print that, it means that somehow the null space is not
> properly getting attached to the matrix.
>
> ​When running with 3.7.4 and -poisson_ksp_view, the output shows that the
nullspace is not attached to the KSP (as it was with 3.5.4)​; however the
print statement is now under the Mat info (which is expected when moving
from KSPSetNullSpace to MatSetNullSpace?).

Though older versions had MatSetNullSpace() they didn't necessarily
> associate it with the KSP so it was not expected to work as a replacement
> for KSPSetNullSpace() with older versions.
>
> Because our other user had great difficulty trying to debug the issue
> feel free to send us at petsc-ma...@mcs.anl.gov your code with
> instructions on building and running and we can try to track down the
> problem. Better than hours and hours spent with fruitless email. We will,
> of course, not distribute the code and will delete it when we are finished
> with it.
>
> ​The code is open-source and hosted on GitHub (
https://github.com/barbagroup/PetIBM)​.
I just pushed the branches `feature-compatible-petsc-3.7` and
`revert-compatible-petsc-3.5` that I used to observe this problem.

PETSc (both 3.5.4 and 3.7.4) was configured as follows:
export PETSC_ARCH="linux-gnu-dbg"
./configure --PETSC_ARCH=$PETSC_ARCH \
--with-cc=gcc \
--with-cxx=g++ \
--with-fc=gfortran \
--COPTFLAGS="-O0" \
--CXXOPTFLAGS="-O0" \
--FOPTFLAGS="-O0" \
--with-debugging=1 \
--download-fblaslapack \
--download-mpich \
--download-hypre \
--download-yaml \
--with-x=1

Our code was built using the following commands:​
mkdir petibm-build
cd petibm-build
​export PETSC_DIR=
export PETSC_ARCH="linux-gnu-dbg"
export PETIBM_DIR=
$PETIBM_DIR/configure --prefix=$PWD \
CXX=$PETSC_DIR/$PETSC_ARCH/bin/mpicxx \
CXXFLAGS="-g -O0 -std=c++11"​
make all
make install

​Then
cd examples
make examples​

​The example of the lid-driven cavity I was talking about can be found in
the folder `examples/2d/convergence/lidDrivenCavity20/20/`​

To run it:
mpiexec -n N /bin/petibm2d -directory


Let me know if you need more info. Thank you.

   Barry
>
>
>
>
>
>
>
>
> > On Oct 25, 2016, at 4:38 PM, Olivier Mesnard 
> wrote:
> >
> > Hi all,
> >
> > We develop a CFD code using the PETSc library that solves the
> Navier-Stokes equations using the fractional-step method from Perot (1993).
> > At each time-step, we solve two systems: one for the velocity field, the
> other, a Poisson system, for the pressure field.
> > One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a 20x20
> grid using 1 or 2 procs.
> > For the Poisson system, we usually use CG preconditioned with GAMG.
> >
> > So far, we have been using PETSc-3.5.4, and we would like to update the
> code with the latest release: 3.7.4.
> >
> > As suggested in the changelog of 3.6, we replaced the routine
> `KSPSetNullSpace()` with `MatSetNullSpace()`.
> >
> > Here is the list of options we use to configure the two solvers:
> > * Velocity solver: prefix `-velocity_`
> >   -velocity_ksp_type bcgs
> >   -velocity_ksp_rtol 1.0E-08
> >   -velocity_ksp_atol 0.0
> >   -velocity_ksp_max_it 1
> >   -velocity_pc_type jacobi
> >   -velocity_ksp_view
> >   -velocity_ksp_monitor_true_residual
> >   -velocity_ksp_converged_reason
> > * Poisson solver: prefix `-poisson_`
> >   -poisson_ksp_type cg
> >   -poisson_ksp_rtol 1.0E-08
> >   -poisson_ksp_atol 0.0
> >   -poisson_ksp_max_it 2
> >   -poisson_pc_type gamg
> >   -poisson_pc_gamg_type agg
> >   -poisson_pc_gamg_agg_nsmooths 1
> >   -poisson_ksp_view
> >   -poisson_ksp_monitor_true_residual
> >   -poisson_ksp_converged_reason
> >
> > With 3.5.4, the case runs normally on 1 or 2 procs.
> > With 3.7.4, the case runs normally on 1 proc but not on 2.
> > Why? The Poisson solver diverges because of an indefinite preconditioner
> (only with 2 procs).
> >
> > We also saw that the routine `MatSetNullSpace()` was already available
> in 3.5.4.
> > With 3.5.4, replacing `KSPSetNullSpace()` with `MatSetNullSpace()` led
> to the Poisson solver diverging because of an indefinite matrix (on 1 and 2
> procs).
> >
> > Thus, we were wondering if we needed to update something else for the
> KSP, and not just modifying the name of the routine?
> >
> > I have attached the output files from 

Re: [petsc-users] Moving from KSPSetNullSpace to MatSetNullSpace

2016-10-25 Thread Barry Smith

  Olivier,

In theory you do not need to change anything else. Are you using a 
different matrix object for the velocity_ksp object than the poisson_ksp object?

The code change in PETSc is very little but we have a report from another 
CFD user who also had problems with the change so there may be some subtle bug 
that we can't figure out causing things to not behave properly.

   First run the 3.7.4 code with -poisson_ksp_view and verify that when it 
prints the matrix information it prints something like "has attached null space"; 
if it does not print that, it means that somehow the null space is not properly 
getting attached to the matrix.

Though older versions had MatSetNullSpace() they didn't necessarily 
associate it with the KSP so it was not expected to work as a replacement for 
KSPSetNullSpace() with older versions.

Because our other user had great difficulty trying to debug the issue feel 
free to send us at petsc-ma...@mcs.anl.gov your code with instructions on 
building and running and we can try to track down the problem. Better than 
hours and hours spent with fruitless email. We will, of course, not distribute 
the code and will delete it when we are finished with it.

   Barry








> On Oct 25, 2016, at 4:38 PM, Olivier Mesnard  
> wrote:
> 
> Hi all,
> 
> We develop a CFD code using the PETSc library that solves the Navier-Stokes 
> equations using the fractional-step method from Perot (1993).
> At each time-step, we solve two systems: one for the velocity field, the 
> other, a Poisson system, for the pressure field.
> One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a 20x20 grid 
> using 1 or 2 procs.
> For the Poisson system, we usually use CG preconditioned with GAMG.
> 
> So far, we have been using PETSc-3.5.4, and we would like to update the code 
> with the latest release: 3.7.4.
> 
> As suggested in the changelog of 3.6, we replaced the routine 
> `KSPSetNullSpace()` with `MatSetNullSpace()`.
> 
> Here is the list of options we use to configure the two solvers:
> * Velocity solver: prefix `-velocity_`
>   -velocity_ksp_type bcgs
>   -velocity_ksp_rtol 1.0E-08
>   -velocity_ksp_atol 0.0
>   -velocity_ksp_max_it 1
>   -velocity_pc_type jacobi
>   -velocity_ksp_view
>   -velocity_ksp_monitor_true_residual
>   -velocity_ksp_converged_reason
> * Poisson solver: prefix `-poisson_`
>   -poisson_ksp_type cg
>   -poisson_ksp_rtol 1.0E-08
>   -poisson_ksp_atol 0.0
>   -poisson_ksp_max_it 2
>   -poisson_pc_type gamg
>   -poisson_pc_gamg_type agg
>   -poisson_pc_gamg_agg_nsmooths 1
>   -poisson_ksp_view
>   -poisson_ksp_monitor_true_residual
>   -poisson_ksp_converged_reason
> 
> With 3.5.4, the case runs normally on 1 or 2 procs.
> With 3.7.4, the case runs normally on 1 proc but not on 2.
> Why? The Poisson solver diverges because of an indefinite preconditioner 
> (only with 2 procs).
> 
> We also saw that the routine `MatSetNullSpace()` was already available in 
> 3.5.4.
> With 3.5.4, replacing `KSPSetNullSpace()` with `MatSetNullSpace()` led to the 
> Poisson solver diverging because of an indefinite matrix (on 1 and 2 procs).
> 
> Thus, we were wondering if we needed to update something else for the KSP, 
> and not just modifying the name of the routine?
> 
> I have attached the output files from the different cases:
> * `run-petsc-3.5.4-n1.log` (3.5.4, `KSPSetNullSpace()`, n=1)
> * `run-petsc-3.5.4-n2.log`
> * `run-petsc-3.5.4-nsp-n1.log` (3.5.4, `MatSetNullSpace()`, n=1)
> * `run-petsc-3.5.4-nsp-n2.log`
> * `run-petsc-3.7.4-n1.log` (3.7.4, `MatSetNullSpace()`, n=1)
> * `run-petsc-3.7.4-n2.log`
> 
> Thank you for your help,
> Olivier
> 



[petsc-users] Moving from KSPSetNullSpace to MatSetNullSpace

2016-10-25 Thread Olivier Mesnard
Hi all,

We develop a CFD code using the PETSc library that solves the Navier-Stokes
equations using the fractional-step method from Perot (1993).
At each time-step, we solve two systems: one for the velocity field, the
other, a Poisson system, for the pressure field.
One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a 20x20
grid using 1 or 2 procs.
For the Poisson system, we usually use CG preconditioned with GAMG.

So far, we have been using PETSc-3.5.4, and we would like to update the
code with the latest release: 3.7.4.

As suggested in the changelog of 3.6, we replaced the routine
`KSPSetNullSpace()` with `MatSetNullSpace()`.
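
For concreteness, a minimal sketch of what that replacement looks like for a Poisson
matrix whose null space is the constant vector (this is not PetIBM's code; `A` is a
placeholder name and error checking is omitted):

  #include <petscksp.h>

  /* Attach the constant null space to the matrix so the KSP (and GAMG)
     can see it; this replaces the old KSPSetNullSpace(ksp, nsp) call.  */
  static PetscErrorCode AttachConstantNullSpace(Mat A)
  {
    MatNullSpace nsp;

    PetscFunctionBegin;
    MatNullSpaceCreate(PetscObjectComm((PetscObject)A), PETSC_TRUE, 0, NULL, &nsp);
    MatSetNullSpace(A, nsp);
    MatNullSpaceDestroy(&nsp);   /* the matrix keeps its own reference */
    PetscFunctionReturn(0);
  }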

Here is the list of options we use to configure the two solvers:
* Velocity solver: prefix `-velocity_`
  -velocity_ksp_type bcgs
  -velocity_ksp_rtol 1.0E-08
  -velocity_ksp_atol 0.0
  -velocity_ksp_max_it 1
  -velocity_pc_type jacobi
  -velocity_ksp_view
  -velocity_ksp_monitor_true_residual
  -velocity_ksp_converged_reason
* Poisson solver: prefix `-poisson_`
  -poisson_ksp_type cg
  -poisson_ksp_rtol 1.0E-08
  -poisson_ksp_atol 0.0
  -poisson_ksp_max_it 2
  -poisson_pc_type gamg
  -poisson_pc_gamg_type agg
  -poisson_pc_gamg_agg_nsmooths 1
  -poisson_ksp_view
  -poisson_ksp_monitor_true_residual
  -poisson_ksp_converged_reason

With 3.5.4, the case runs normally on 1 or 2 procs.
With 3.7.4, the case runs normally on 1 proc but not on 2.
Why? The Poisson solver diverges because of an indefinite preconditioner
(only with 2 procs).

We also saw that the routine `MatSetNullSpace()` was already available in
3.5.4.
With 3.5.4, replacing `KSPSetNullSpace()` with `MatSetNullSpace()` led to
the Poisson solver diverging because of an indefinite matrix (on 1 and 2
procs).

Thus, we were wondering if we needed to update something else for the KSP,
and not just modifying the name of the routine?

I have attached the output files from the different cases:
* `run-petsc-3.5.4-n1.log` (3.5.4, `KSPSetNullSpace()`, n=1)
* `run-petsc-3.5.4-n2.log`
* `run-petsc-3.5.4-nsp-n1.log` (3.5.4, `MatSetNullSpace()`, n=1)
* `run-petsc-3.5.4-nsp-n2.log`
* `run-petsc-3.7.4-n1.log` (3.7.4, `MatSetNullSpace()`, n=1)
* `run-petsc-3.7.4-n2.log`

Thank you for your help,
Olivier

==
*** PetIBM - Start ***
==
directory: ./

Parsing file .//cartesianMesh.yaml... done.

Parsing file .//flowDescription.yaml... done.

Parsing file .//simulationParameters.yaml... done.

---
Cartesian grid
---
number of cells: 20 x 20
---

---
Flow
---
dimensions: 2
viscosity: 0.01
initial velocity field:
	0
	0
boundary conditions (component, type, value):
	->location: xMinus (left)
		0 	 DIRICHLET 	 0
		1 	 DIRICHLET 	 0
	->location: xPlus (right)
		0 	 DIRICHLET 	 0
		1 	 DIRICHLET 	 0
	->location: yMinus (bottom)
		0 	 DIRICHLET 	 0
		1 	 DIRICHLET 	 0
	->location: yPlus (top)
		0 	 DIRICHLET 	 1
		1 	 DIRICHLET 	 0
---

---
Time-stepping
---
formulation: Navier-Stokes solver (Perot, 1993)
convection: Euler-explicit
diffusion: Euler-implicit
time-increment: 0.0005
starting time-step: 0
number of time-steps: 1
saving-interval: 1
---


KSP info: Velocity system

KSP Object:(velocity_) 1 MPI processes
  type: bcgs
  maximum iterations=1
  tolerances:  relative=1e-08, absolute=0, divergence=1
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(velocity_) 1 MPI processes
  type: jacobi
  PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
type: seqaij
rows=760, cols=760
total: nonzeros=3644, allocated nonzeros=3644
total number of mallocs used during MatSetValues calls =0
  not using I-node routines


KSP info: Poisson system

KSP Object:(poisson_) 1 MPI processes
  type: cg
  maximum iterations=2
  tolerances:  relative=1e-08, absolute=0, divergence=1
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(poisson_) 1 MPI processes
  type: gamg
  PC has not been set up so information may be incomplete
MG: type is MULTIPLICATIVE, levels=0 cycles=unknown
  Cycles per PCApply=0
  Using Galerkin computed coarse grid matrices
  linear system matrix = precond matrix:
  Mat Object:   1 MPI processes
type: seqaij
rows=400, cols=400
total: nonzeros=1920, allocated nonzeros=1920
total number of mallocs used during MatSetValues calls =0
  not using I-node 

Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Tzanio Kolev

>> Stefano Zampini
>> replied to the issue on github and was stating that there is some intention 
>> to get MFEM working with PETSc but there is no specific timeframe.
> 
> There’s currently an open pull request for PETSc solvers inside the private 
> MFEM repo.
> However, I don’t know when the code, if merged, will be released.

We do plan to merge this as part of mfem’s next official release (v3.3). 
Optimistically, that's targeted for the end of 2016 :)

Tzanio

Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Matthew Knepley
On Tue, Oct 25, 2016 at 1:43 PM, Jed Brown  wrote:

> Stefano Zampini  writes:
> > MATHYPRE could be a shell wrapping Hypre calls for the moment.
> > However, HypreParCSR and MATAIJ are mostly equivalent formats. As far as
> I know, the main (only?) difference resides in the fact that the diagonal
> term of the diagonal part is ordered first in the CSR.
> > For this reason, I think it should inherit from AIJ.
>
> This is more delicate.  Derived classes need to *exactly* retain the AIJ
> structure because all unimplemented methods use the parent
> implementations.  If the rows are not sorted, MatSetValues, MatGetRow,
> and the like cease to work.  You can still make MatHypreParCSR respond
> to MatMPIAIJSetPreallocation, but I don't think it can be derived from
> AIJ unless you audit *all* reachable AIJ code to remove the assumption
> of sorted rows *and* document the API change for all users that could
> observe the lack of sorting.  (I don't think that's worthwhile.)
>
> Note that the existing AIJ derived implementations merely augment the
> AIJ structure rather than modifying it.
>


  Inheritance is almost never useful except in contrived textbook examples.
It was a tremendous pain to make
work in PETSc, and I think if we did it again, I would just go back and
make subobjects that packaged up lower
level behavior instead of inheriting.


   Matt

> As soon  as I have time, I can start a new matrix class, but I don’t have
> much time to implement at the SetValues level yet.
>
> That's not urgent, but if you write it as a Mat implementation instead
> of some utility functions, it would be easy to add later and would not
> disrupt existing users.  There is no requirement that all Mat
> implementations include all the methods that "make sense"; it can be
> fleshed out later according to demand.
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Jed Brown
Stefano Zampini  writes:
> MATHYPRE could be a shell wrapping Hypre calls for the moment.
> However, HypreParCSR and MATAIJ are mostly equivalent formats. As far as I 
> know, the main (only?) difference resides in the fact that the diagonal term 
> of the diagonal part is ordered first in the CSR.
> For this reason, I think it should inherit from AIJ.

This is more delicate.  Derived classes need to *exactly* retain the AIJ
structure because all unimplemented methods use the parent
implementations.  If the rows are not sorted, MatSetValues, MatGetRow,
and the like cease to work.  You can still make MatHypreParCSR respond
to MatMPIAIJSetPreallocation, but I don't think it can be derived from
AIJ unless you audit *all* reachable AIJ code to remove the assumption
of sorted rows *and* document the API change for all users that could
observe the lack of sorting.  (I don't think that's worthwhile.)

Note that the existing AIJ derived implementations merely augment the
AIJ structure rather than modifying it.

> As soon  as I have time, I can start a new matrix class, but I don’t have 
> much time to implement at the SetValues level yet.

That's not urgent, but if you write it as a Mat implementation instead
of some utility functions, it would be easy to add later and would not
disrupt existing users.  There is no requirement that all Mat
implementations include all the methods that "make sense"; it can be
fleshed out later according to demand.
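
As a rough illustration of the HypreParCSR-to-MPIAIJ conversion discussed in this thread, a
hedged sketch (not Stefano's or Abdullah's code) that does a counting pass first so the target
matrix is preallocated, which is the usual cause of slow copies. Header names and integer widths
differ between hypre versions, and PetscInt/HYPRE_Int and PetscScalar/HYPRE_Complex are assumed
to have matching sizes here:

  #include <petscmat.h>
  #include <HYPRE_parcsr_mv.h>

  /* Sketch: copy a hypre ParCSR matrix into a new PETSc AIJ matrix,
     preallocating from a first counting pass over the rows.          */
  static PetscErrorCode ParCSRToAIJ_Sketch(HYPRE_ParCSRMatrix H, MPI_Comm comm, Mat *A)
  {
    HYPRE_Int     rstart, rend, cstart, cend, ncols, *cols;
    HYPRE_Complex *vals;
    PetscInt      i, j, nloc, *dnnz, *onnz;

    PetscFunctionBegin;
    HYPRE_ParCSRMatrixGetLocalRange(H, &rstart, &rend, &cstart, &cend); /* inclusive ranges */
    nloc = rend - rstart + 1;
    PetscMalloc2(nloc, &dnnz, nloc, &onnz);

    /* Pass 1: count diagonal-block and off-diagonal-block nonzeros per local row. */
    for (i = 0; i < nloc; i++) {
      HYPRE_ParCSRMatrixGetRow(H, rstart + i, &ncols, &cols, &vals);
      dnnz[i] = onnz[i] = 0;
      for (j = 0; j < ncols; j++) {
        if (cols[j] >= cstart && cols[j] <= cend) dnnz[i]++; else onnz[i]++;
      }
      HYPRE_ParCSRMatrixRestoreRow(H, rstart + i, &ncols, &cols, &vals);
    }

    MatCreate(comm, A);
    MatSetSizes(*A, nloc, cend - cstart + 1, PETSC_DECIDE, PETSC_DECIDE);
    MatSetType(*A, MATAIJ);
    MatSeqAIJSetPreallocation(*A, 0, dnnz);           /* no-op on the parallel type */
    MatMPIAIJSetPreallocation(*A, 0, dnnz, 0, onnz);  /* no-op on the sequential type */

    /* Pass 2: insert the values row by row. */
    for (i = 0; i < nloc; i++) {
      PetscInt row = rstart + i;
      HYPRE_ParCSRMatrixGetRow(H, rstart + i, &ncols, &cols, &vals);
      MatSetValues(*A, 1, &row, (PetscInt)ncols, (PetscInt *)cols,
                   (PetscScalar *)vals, INSERT_VALUES);
      HYPRE_ParCSRMatrixRestoreRow(H, rstart + i, &ncols, &cols, &vals);
    }
    MatAssemblyBegin(*A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(*A, MAT_FINAL_ASSEMBLY);
    PetscFree2(dnnz, onnz);
    PetscFunctionReturn(0);
  }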




Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Barry Smith

> On Oct 25, 2016, at 12:31 PM, Matthew Knepley  wrote:
> 
> On Tue, Oct 25, 2016 at 12:19 PM, Stefano Zampini  
> wrote:
> I have a working conversion from HypreParCSR to PETSc MPIAIJ format.
> I could add this code to PETSc, maybe in the contrib folder. Barry, what do 
> you think?
> 
> No, no one looks there. Add it to src/mat/utils and make an interface 
> function like MatCreateFromHypreParCSR().

   I agree with Matt on this.

> 
>   Thanks,
> 
>  Matt
>  
> > We tried a similar approach to get MFEM objects to PETSc and the real
> > problem is that all other convenient functions like creating
> > gridfunctions and projections, you have to convert them every time
> > which is basically nothing more than a bad workaround.
> 
> So far, my interface covers matrices and Krylov solvers (PCFieldSplit and 
> PCBDDC are explicitly supported).
> 
> Can you tell me how would you like to use these objects with PETSc? What you 
> would like to achieve?
> 
> So far, my work on the PETSc interface to MFEM originated from a wishlist for 
> solvers, but I could expand it.
> 
> 
> > Stefano Zampini
> > replied to the issue on github and was stating that there is some
> > intention to get MFEM working with PETSc but there is no specific
> > timeframe.
> >
> 
> There’s currently an open pull request for PETSc solvers inside the private 
> MFEM repo.
> However, I don’t know when the code, if merged, will be released.
> 
> 
> > Regards
> > Julian Andrej
> >
> > On Tue, Oct 25, 2016 at 6:30 PM, Abdullah Ali Sivas
> >  wrote:
> >> I will check that. I am preallocating but it may be that I am not 
> >> allocating
> >> big enough. I still have to figure out nuance differences between these
> >> formats to solve the bugs. I appreciate your answer and hope Satish knows
> >> it.
> >>
> >> Thank you,
> >> Abdullah Ali Sivas
> >>
> >>
> >> On 2016-10-25 12:15 PM, Matthew Knepley wrote:
> >>
> >> On Tue, Oct 25, 2016 at 10:54 AM, Abdullah Ali Sivas
> >>  wrote:
> >>>
> >>> Hello,
> >>>
> >>> I want to use PETSc with mfem and I know that mfem people will figure out
> >>> a way to do it in few months. But for now as a temporary solution I just
> >>> thought of converting hypre PARCSR matrices (that is what mfem uses as
> >>> linear solver package) into PETSc MPIAIJ matrices and I have a 
> >>> semi-working
> >>> code with some bugs. Also my code is dauntingly slow and seems like not
> >>> scaling. I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc and
> >>> hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as starting 
> >>> points.
> >>> Before starting I checked whether there was anything done similar to 
> >>> this, I
> >>> could not find anything.
> >>>
> >>> My question is, are you aware of such a conversion code (i.e. something
> >>> like hypre_ParCSRtoPETScMPIAIJ( hypre_ParCSRMatrix *matrix, Mat *A)?
> >>
> >> No, but maybe Satish knows. Slow running times most likely come from lack 
> >> of
> >> preallocation for the target matrix.
> >>
> >>  Thanks,
> >>
> >> Matt
> >>>
> >>> Thanks in advance,
> >>> Abdullah Ali Sivas
> >>
> >>
> >>
> >>
> >> --
> >> What most experimenters take for granted before they begin their 
> >> experiments
> >> is infinitely more interesting than any results to which their experiments
> >> lead.
> >> -- Norbert Wiener
> >>
> >>
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener



Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Stefano Zampini

On Oct 25, 2016, at 9:10 PM, Julian Andrej  wrote:

> We have an implementation which models a real physical application but
> don't have the manpower to implement different preconditioner themes
> (like with fieldsplit) or try out different time solving schemes
> (which is way too easy with TS).

SNES and TS are on my TODOLIST. We can discuss this.

> Specifically we create a bunch of
> operators (Boundary and Face integrators) which model
> actuator/observer domains on a predefined mesh.
> 

At the moment, PETSc matrices are created during the MFEM assemble call; in the 
MFEM spirit, they are obtained as R A P operations.
mfem::BlockOperator is fully supported, and MATNEST is created on the fly. 
MATIS is also supported (for PCBDDC)

You can mail me directly if you think this discussion is getting too technical 
for the PETSc mailing list.
 

> On Tue, Oct 25, 2016 at 7:19 PM, Stefano Zampini
>  wrote:
>> I have a working conversion from HypreParCSR to PETSc MPIAIJ format.
>> I could add this code to PETSc, maybe in the contrib folder. Barry, what do 
>> you think?
>> 
>> 
>>> We tried a similar approach to get MFEM objects to PETSc and the real
>>> problem is that all other convenient functions like creating
>>> gridfunctions and projections, you have to convert them every time
>>> which is basically nothing more than a bad workaround.
>> 
>> So far, my interface covers matrices and Krylov solvers (PCFieldSplit and 
>> PCBDDC are explicitly supported).
>> 
>> Can you tell me how would you like to use these objects with PETSc? What you 
>> would like to achieve?
>> 
>> So far, my work on the PETSc interface to MFEM originated from a wishlist 
>> for solvers, but I could expand it.
>> 
>> 
>>> Stefano Zampini
>>> replied to the issue on github and was stating that there is some
>>> intention to get MFEM working with PETSc but there is no specific
>>> timeframe.
>>> 
>> 
>> There’s currently an open pull request for PETSc solvers inside the private 
>> MFEM repo.
>> However, I don’t know when the code, if merged, will be released.
>> 
>> 
>>> Regards
>>> Julian Andrej
>>> 
>>> On Tue, Oct 25, 2016 at 6:30 PM, Abdullah Ali Sivas
>>>  wrote:
 I will check that. I am preallocating but it may be that I am not 
 allocating
 big enough. I still have to figure out nuance differences between these
 formats to solve the bugs. I appreciate your answer and hope Satish knows
 it.
 
 Thank you,
 Abdullah Ali Sivas
 
 
 On 2016-10-25 12:15 PM, Matthew Knepley wrote:
 
 On Tue, Oct 25, 2016 at 10:54 AM, Abdullah Ali Sivas
  wrote:
> 
> Hello,
> 
> I want to use PETSc with mfem and I know that mfem people will figure out
> a way to do it in few months. But for now as a temporary solution I just
> thought of converting hypre PARCSR matrices (that is what mfem uses as
> linear solver package) into PETSc MPIAIJ matrices and I have a 
> semi-working
> code with some bugs. Also my code is dauntingly slow and seems like not
> scaling. I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc and
> hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as starting 
> points.
> Before starting I checked whether there was anything done similar to 
> this, I
> could not find anything.
> 
> My question is, are you aware of such a conversion code (i.e. something
> like hypre_ParCSRtoPETScMPIAIJ( hypre_ParCSRMatrix *matrix, Mat *A)?
 
 No, but maybe Satish knows. Slow running times most likely come from lack 
 of
 preallocation for the target matrix.
 
 Thanks,
 
Matt
> 
> Thanks in advance,
> Abdullah Ali Sivas
 
 
 
 
 --
 What most experimenters take for granted before they begin their 
 experiments
 is infinitely more interesting than any results to which their experiments
 lead.
 -- Norbert Wiener
 
 
>> 



Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Julian Andrej
We have an implementation which models a real physical application, but we
don't have the manpower to implement different preconditioner schemes
(like with fieldsplit) or to try out different time-stepping schemes
(which would be very easy with TS). Specifically, we create a bunch of
operators (boundary and face integrators) which model
actuator/observer domains on a predefined mesh.

On Tue, Oct 25, 2016 at 7:19 PM, Stefano Zampini
 wrote:
> I have a working conversion from HypreParCSR to PETSc MPIAIJ format.
> I could add this code to PETSc, maybe in the contrib folder. Barry, what do 
> you think?
>
>
>> We tried a similar approach to get MFEM objects to PETSc and the real
>> problem is that all other convenient functions like creating
>> gridfunctions and projections, you have to convert them every time
>> which is basically nothing more than a bad workaround.
>
> So far, my interface covers matrices and Krylov solvers (PCFieldSplit and 
> PCBDDC are explicitly supported).
>
> Can you tell me how would you like to use these objects with PETSc? What you 
> would like to achieve?
>
> So far, my work on the PETSc interface to MFEM originated from a wishlist for 
> solvers, but I could expand it.
>
>
>> Stefano Zampini
>> replied to the issue on github and was stating that there is some
>> intention to get MFEM working with PETSc but there is no specific
>> timeframe.
>>
>
> There’s currently an open pull request for PETSc solvers inside the private 
> MFEM repo.
> However, I don’t know when the code, if merged, will be released.
>
>
>> Regards
>> Julian Andrej
>>
>> On Tue, Oct 25, 2016 at 6:30 PM, Abdullah Ali Sivas
>>  wrote:
>>> I will check that. I am preallocating but it may be that I am not allocating
>>> big enough. I still have to figure out nuance differences between these
>>> formats to solve the bugs. I appreciate your answer and hope Satish knows
>>> it.
>>>
>>> Thank you,
>>> Abdullah Ali Sivas
>>>
>>>
>>> On 2016-10-25 12:15 PM, Matthew Knepley wrote:
>>>
>>> On Tue, Oct 25, 2016 at 10:54 AM, Abdullah Ali Sivas
>>>  wrote:

 Hello,

 I want to use PETSc with mfem and I know that mfem people will figure out
 a way to do it in few months. But for now as a temporary solution I just
 thought of converting hypre PARCSR matrices (that is what mfem uses as
 linear solver package) into PETSc MPIAIJ matrices and I have a semi-working
 code with some bugs. Also my code is dauntingly slow and seems like not
 scaling. I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc and
 hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as starting 
 points.
 Before starting I checked whether there was anything done similar to this, 
 I
 could not find anything.

 My question is, are you aware of such a conversion code (i.e. something
 like hypre_ParCSRtoPETScMPIAIJ( hypre_ParCSRMatrix *matrix, Mat *A)?
>>>
>>> No, but maybe Satish knows. Slow running times most likely come from lack of
>>> preallocation for the target matrix.
>>>
>>>  Thanks,
>>>
>>> Matt

 Thanks in advance,
 Abdullah Ali Sivas
>>>
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their experiments
>>> is infinitely more interesting than any results to which their experiments
>>> lead.
>>> -- Norbert Wiener
>>>
>>>
>


Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Stefano Zampini

On Oct 25, 2016, at 8:50 PM, Jed Brown  wrote:

> Matthew Knepley  writes:
> 
>> On Tue, Oct 25, 2016 at 12:19 PM, Stefano Zampini >> wrote:
>> 
>>> I have a working conversion from HypreParCSR to PETSc MPIAIJ format.
>>> I could add this code to PETSc, maybe in the contrib folder. Barry, what
>>> do you think?
>>> 
>> 
>> No, no one looks there. Add it to src/mat/utils and make an interface
>> function like MatCreateFromHypreParCSR().
> 
> Note that mhyp.c contains code to convert AIJ matrices to ParCSR.  If we
> were to create a MatHypreParCSR implementation, we could use those
> functions for MatConvert_{Seq,MPI}AIJ_HypreParCSR and use your function
> for the reverse.  That would be consistent with how external matrix
> formats are normally represented and may enable some new capability to
> mix PETSc and Hypre components in the future.  Here, I'm envisioning
> 
>  PetscErrorCode MatCreateHypreParCSR(hyper_ParCSRMatrix *A,Mat *B);

> 
> This way, if a user chooses -pc_type hypre, there would be no copies for
> going through PETSc.  Similarly, if we implement
> MatSetValues_HypreParCSR, a pure PETSc application could use Hypre
> preconditioners with no copies.

MATHYPRE could be a shell wrapping Hypre calls for the moment.
However, HypreParCSR and MATAIJ are mostly equivalent formats. As far as I 
know, the main (only?) difference resides in the fact that the diagonal term of 
the diagonal part is ordered first in the CSR.
For this reason, I think it should inherit from AIJ.

As soon as I have time, I can start a new matrix class, but I don't have much 
time yet to implement it at the SetValues level.





Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Matthew Knepley
On Tue, Oct 25, 2016 at 12:50 PM, Jed Brown  wrote:

> Matthew Knepley  writes:
>
> > On Tue, Oct 25, 2016 at 12:19 PM, Stefano Zampini <
> stefano.zamp...@gmail.com
> >> wrote:
> >
> >> I have a working conversion from HypreParCSR to PETSc MPIAIJ format.
> >> I could add this code to PETSc, maybe in the contrib folder. Barry, what
> >> do you think?
> >>
> >
> > No, no one looks there. Add it to src/mat/utils and make an interface
> > function like MatCreateFromHypreParCSR().
>
> Note that mhyp.c contains code to convert AIJ matrices to ParCSR.  If we
> were to create a MatHypreParCSR implementation, we could use those
> functions for MatConvert_{Seq,MPI}AIJ_HypreParCSR and use your function
> for the reverse.  That would be consistent with how external matrix
> formats are normally represented and may enable some new capability to
> mix PETSc and Hypre components in the future.  Here, I'm envisioning
>
>   PetscErrorCode MatCreateHypreParCSR(hyper_ParCSRMatrix *A,Mat *B);
>
> This way, if a user chooses -pc_type hypre, there would be no copies for
> going through PETSc.  Similarly, if we implement
> MatSetValues_HypreParCSR, a pure PETSc application could use Hypre
> preconditioners with no copies.
>

This is a better way, but I did not suggest it because it entails the extra
work of implementing
the Mat methods for the new class. If Stefano has time for that, fantastic.

  Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Abdullah Ali Sivas

@Stefano Zampini:

I am planning to do two things. One is to directly use the Krylov solvers 
and preconditioners available in PETSc to try out some ideas and to perform 
some matrix manipulations, like symmetric diagonal scaling or extracting a 
submatrix. If that works, I will implement a few things (preconditioners or 
other Krylov solvers) using PETSc and try them out. So what you did is 
like a blessing for me (basically because I have been working on the very same 
thing for days now), and thank you for that.



@Mark and Jed

These are great ideas and I believe a lot of users like me will be 
grateful if these are available.


On 2016-10-25 01:50 PM, Jed Brown wrote:

Matthew Knepley  writes:


On Tue, Oct 25, 2016 at 12:19 PM, Stefano Zampini 

Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Jed Brown
Matthew Knepley  writes:

> On Tue, Oct 25, 2016 at 12:19 PM, Stefano Zampini > wrote:
>
>> I have a working conversion from HypreParCSR to PETSc MPIAIJ format.
>> I could add this code to PETSc, maybe in the contrib folder. Barry, what
>> do you think?
>>
>
> No, no one looks there. Add it to src/mat/utils and make an interface
> function like MatCreateFromHypreParCSR().

Note that mhyp.c contains code to convert AIJ matrices to ParCSR.  If we
were to create a MatHypreParCSR implementation, we could use those
functions for MatConvert_{Seq,MPI}AIJ_HypreParCSR and use your function
for the reverse.  That would be consistent with how external matrix
formats are normally represented and may enable some new capability to
mix PETSc and Hypre components in the future.  Here, I'm envisioning

  PetscErrorCode MatCreateHypreParCSR(hyper_ParCSRMatrix *A,Mat *B);

This way, if a user chooses -pc_type hypre, there would be no copies for
going through PETSc.  Similarly, if we implement
MatSetValues_HypreParCSR, a pure PETSc application could use Hypre
preconditioners with no copies.
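
For contrast with the zero-copy MATHYPRE idea above, here is a minimal sketch of the route that already existed at the time: give PETSc an ordinary AIJ matrix and select hypre BoomerAMG through PCHYPRE, in which case the AIJ-to-ParCSR conversion (the mhyp.c code mentioned above) happens internally, i.e. with a copy. This is a generic illustration, not code from this thread, and assumes PETSc was configured with hypre.

#include <petscksp.h>

PetscErrorCode SolveWithBoomerAMG(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PetscObjectComm((PetscObject)A), &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCHYPRE);CHKERRQ(ierr);          /* same as -pc_type hypre */
  ierr = PCHYPRESetType(pc, "boomeramg");CHKERRQ(ierr); /* same as -pc_hypre_type boomeramg */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}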


signature.asc
Description: PGP signature


Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Matthew Knepley
On Tue, Oct 25, 2016 at 12:19 PM, Stefano Zampini  wrote:

> I have a working conversion from HypreParCSR to PETSc MPIAIJ format.
> I could add this code to PETSc, maybe in the contrib folder. Barry, what
> do you think?
>

No, no one looks there. Add it to src/mat/utils and make an interface
function like MatCreateFromHypreParCSR().

  Thanks,

 Matt


> > We tried a similar approach to get MFEM objects to PETSc and the real
> > problem is that all other convenient functions like creating
> > gridfunctions and projections, you have to convert them every time
> > which is basically nothing more than a bad workaround.
>
> So far, my interface covers matrices and Krylov solvers (PCFieldSplit and
> PCBDDC are explicitly supported).
>
> Can you tell me how would you like to use these objects with PETSc? What
> you would like to achieve?
>
> So far, my work on the PETSc interface to MFEM originated from a wishlist
> for solvers, but I could expand it.
>
>
> > Stefano Zampini
> > replied to the issue on github and was stating that there is some
> > intention to get MFEM working with PETSc but there is no specific
> > timeframe.
> >
>
> There’s currently an open pull request for PETSc solvers inside the
> private MFEM repo.
> However, I don’t know when the code, if merged, will be released.
>
>
> > Regards
> > Julian Andrej
> >
> > On Tue, Oct 25, 2016 at 6:30 PM, Abdullah Ali Sivas
> >  wrote:
> >> I will check that. I am preallocating but it may be that I am not
> allocating
> >> big enough. I still have to figure out nuance differences between these
> >> formats to solve the bugs. I appreciate your answer and hope Satish
> knows
> >> it.
> >>
> >> Thank you,
> >> Abdullah Ali Sivas
> >>
> >>
> >> On 2016-10-25 12:15 PM, Matthew Knepley wrote:
> >>
> >> On Tue, Oct 25, 2016 at 10:54 AM, Abdullah Ali Sivas
> >>  wrote:
> >>>
> >>> Hello,
> >>>
> >>> I want to use PETSc with mfem and I know that mfem people will figure
> out
> >>> a way to do it in few months. But for now as a temporary solution I
> just
> >>> thought of converting hypre PARCSR matrices (that is what mfem uses as
> >>> linear solver package) into PETSc MPIAIJ matrices and I have a
> semi-working
> >>> code with some bugs. Also my code is dauntingly slow and seems like not
> >>> scaling. I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc and
> >>> hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as starting
> points.
> >>> Before starting I checked whether there was anything done similar to
> this, I
> >>> could not find anything.
> >>>
> >>> My question is, are you aware of such a conversion code (i.e. something
> >>> like hypre_ParCSRtoPETScMPIAIJ( hypre_ParCSRMatrix *matrix, Mat *A)?
> >>
> >> No, but maybe Satish knows. Slow running times most likely come from
> lack of
> >> preallocation for the target matrix.
> >>
> >>  Thanks,
> >>
> >> Matt
> >>>
> >>> Thanks in advance,
> >>> Abdullah Ali Sivas
> >>
> >>
> >>
> >>
> >> --
> >> What most experimenters take for granted before they begin their
> experiments
> >> is infinitely more interesting than any results to which their
> experiments
> >> lead.
> >> -- Norbert Wiener
> >>
> >>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Stefano Zampini
I have a working conversion from HypreParCSR to PETSc MPIAIJ format.
I could add this code to PETSc, maybe in the contrib folder. Barry, what do you 
think?


> We tried a similar approach to get MFEM objects to PETSc and the real
> problem is that all other convenient functions like creating
> gridfunctions and projections, you have to convert them every time
> which is basically nothing more than a bad workaround.

So far, my interface covers matrices and Krylov solvers (PCFieldSplit and 
PCBDDC are explicitly supported).

Can you tell me how you would like to use these objects with PETSc? What 
would you like to achieve?

So far, my work on the PETSc interface to MFEM originated from a wishlist for 
solvers, but I could expand it.


> Stefano Zampini
> replied to the issue on github and was stating that there is some
> intention to get MFEM working with PETSc but there is no specific
> timeframe.
> 

There’s currently an open pull request for PETSc solvers inside the private 
MFEM repo.
However, I don’t know when the code, if merged, will be released.


> Regards
> Julian Andrej
> 
> On Tue, Oct 25, 2016 at 6:30 PM, Abdullah Ali Sivas
>  wrote:
>> I will check that. I am preallocating but it may be that I am not allocating
>> big enough. I still have to figure out nuance differences between these
>> formats to solve the bugs. I appreciate your answer and hope Satish knows
>> it.
>> 
>> Thank you,
>> Abdullah Ali Sivas
>> 
>> 
>> On 2016-10-25 12:15 PM, Matthew Knepley wrote:
>> 
>> On Tue, Oct 25, 2016 at 10:54 AM, Abdullah Ali Sivas
>>  wrote:
>>> 
>>> Hello,
>>> 
>>> I want to use PETSc with mfem and I know that mfem people will figure out
>>> a way to do it in few months. But for now as a temporary solution I just
>>> thought of converting hypre PARCSR matrices (that is what mfem uses as
>>> linear solver package) into PETSc MPIAIJ matrices and I have a semi-working
>>> code with some bugs. Also my code is dauntingly slow and seems like not
>>> scaling. I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc and
>>> hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as starting points.
>>> Before starting I checked whether there was anything done similar to this, I
>>> could not find anything.
>>> 
>>> My question is, are you aware of such a conversion code (i.e. something
>>> like hypre_ParCSRtoPETScMPIAIJ( hypre_ParCSRMatrix *matrix, Mat *A)?
>> 
>> No, but maybe Satish knows. Slow running times most likely come from lack of
>> preallocation for the target matrix.
>> 
>>  Thanks,
>> 
>> Matt
>>> 
>>> Thanks in advance,
>>> Abdullah Ali Sivas
>> 
>> 
>> 
>> 
>> --
>> What most experimenters take for granted before they begin their experiments
>> is infinitely more interesting than any results to which their experiments
>> lead.
>> -- Norbert Wiener
>> 
>> 



Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Julian Andrej
We tried a similar approach to get MFEM objects to PETSc and the real
problem is that all other convenient functions like creating
gridfunctions and projections, you have to convert them every time
which is basically nothing more than a bad workaround. Stefano Zampini
replied to the issue on github and was stating that there is some
intention to get MFEM working with PETSc but there is no specific
timeframe.

Regards
Julian Andrej

On Tue, Oct 25, 2016 at 6:30 PM, Abdullah Ali Sivas
 wrote:
> I will check that. I am preallocating but it may be that I am not allocating
> big enough. I still have to figure out nuance differences between these
> formats to solve the bugs. I appreciate your answer and hope Satish knows
> it.
>
> Thank you,
> Abdullah Ali Sivas
>
>
> On 2016-10-25 12:15 PM, Matthew Knepley wrote:
>
> On Tue, Oct 25, 2016 at 10:54 AM, Abdullah Ali Sivas
>  wrote:
>>
>> Hello,
>>
>> I want to use PETSc with mfem and I know that mfem people will figure out
>> a way to do it in few months. But for now as a temporary solution I just
>> thought of converting hypre PARCSR matrices (that is what mfem uses as
>> linear solver package) into PETSc MPIAIJ matrices and I have a semi-working
>> code with some bugs. Also my code is dauntingly slow and seems like not
>> scaling. I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc and
>> hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as starting points.
>> Before starting I checked whether there was anything done similar to this, I
>> could not find anything.
>>
>> My question is, are you aware of such a conversion code (i.e. something
>> like hypre_ParCSRtoPETScMPIAIJ( hypre_ParCSRMatrix *matrix, Mat *A)?
>
> No, but maybe Satish knows. Slow running times most likely come from lack of
> preallocation for the target matrix.
>
>   Thanks,
>
>  Matt
>>
>> Thanks in advance,
>> Abdullah Ali Sivas
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener
>
>


Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Abdullah Ali Sivas
I will check that. I am preallocating, but it may be that I am not 
allocating enough. I still have to figure out the nuanced differences 
between these formats to fix the bugs. I appreciate your answer and 
hope Satish knows of one.


Thank you,
Abdullah Ali Sivas


On 2016-10-25 12:15 PM, Matthew Knepley wrote:
On Tue, Oct 25, 2016 at 10:54 AM, Abdullah Ali Sivas 
> wrote:


Hello,

I want to use PETSc with mfem and I know that mfem people will
figure out a way to do it in few months. But for now as a
temporary solution I just thought of converting hypre PARCSR
matrices (that is what mfem uses as linear solver package) into
PETSc MPIAIJ matrices and I have a semi-working code with some
bugs. Also my code is dauntingly slow and seems like not scaling.
I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc and
hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as
starting points. Before starting I checked whether there was
anything done similar to this, I could not find anything.

My question is, are you aware of such a conversion code (i.e.
something like hypre_ParCSRtoPETScMPIAIJ( hypre_ParCSRMatrix
*matrix, Mat *A)?

No, but maybe Satish knows. Slow running times most likely come from 
lack of preallocation for the target matrix.


  Thanks,

 Matt

Thanks in advance,
Abdullah Ali Sivas




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener




Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-25 Thread Hong
Sherry,

We set '-mat_superlu_dist_fact SamePattern'  as default in
petsc/superlu_dist on 12/6/15 (see attached email below).

However, Anton must set 'SamePattern_SameRowPerm' to avoid a crash in his
code. Checking
http://crd-legacy.lbl.gov/~xiaoye/SuperLU/superlu_dist_code_html/pzgssvx___a_bglobal_8c.html
I see a detailed description of using SamePattern_SameRowPerm, which requires
more from the user than SamePattern does. I guess these flags exist for
efficiency: the library sets a default and lets users switch it for their
own applications. The default setting should not cause a crash; if a crash
occurs, a meaningful error message would help.

Do you have a suggestion for how we should set the default for this flag in PETSc?

Hong
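
For reference, a minimal sketch (using the PETSc 3.7-era names; not code from this thread) of selecting SuperLU_DIST as the direct solver in code; the -mat_superlu_dist_* flags discussed above, including -mat_superlu_dist_fact, are then picked up from the options database at setup time:

#include <petscksp.h>

PetscErrorCode SetupSuperLUDist(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);  /* direct solve only */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERSUPERLU_DIST);CHKERRQ(ierr);
  /* reads e.g. -mat_superlu_dist_fact SamePattern_SameRowPerm */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}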

---
Hong 
12/7/15
to Danyang, petsc-maint, PETSc, Xiaoye
Danyang :

Adding '-mat_superlu_dist_fact SamePattern' fixed the problem. Below is how
I figured it out.

1. Reading ex52f.F, I see '-superlu_default' =
'-pc_factor_mat_solver_package superlu_dist', the later enables runtime
options for other packages. I use superlu_dist-4.2 and superlu-4.1 for the
tests below.
...
5.
Using a_flow_check_1.bin, I am able to reproduce the error you reported:
all packages give correct results except superlu_dist:
./ex52f -f0 matrix_and_rhs_bin/a_flow_check_1.bin -rhs
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check
-loop_folder matrix_and_rhs_bin -pc_type lu -pc_factor_mat_solver_package
superlu_dist
Norm of error  2.5970E-12 iterations 1
 -->Test for matrix  168
Norm of error  1.3936E-01 iterations34
 -->Test for matrix  169

I guess the error might come from reuse of matrix factor. Replacing default
-mat_superlu_dist_fact  with
-mat_superlu_dist_fact SamePattern, I get

./ex52f -f0 matrix_and_rhs_bin/a_flow_check_1.bin -rhs
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check
-loop_folder matrix_and_rhs_bin -pc_type lu -pc_factor_mat_solver_package
superlu_dist -mat_superlu_dist_fact SamePattern

Norm of error  2.5970E-12 iterations 1
 -->Test for matrix  168
...
Sherry may tell you why SamePattern_SameRowPerm causes the difference here.
Based on the above experiments, I would set the following as defaults:
'-mat_superlu_diagpivotthresh 0.0' in petsc/superlu interface.
'-mat_superlu_dist_fact SamePattern' in petsc/superlu_dist interface.

Hong

On Tue, Oct 25, 2016 at 10:38 AM, Hong  wrote:

> Anton,
> I guess, when you reuse matrix and its symbolic factor with updated
> numerical values, superlu_dist requires this option. I'm cc'ing Sherry to
> confirm it.
>
> I'll check petsc/superlu-dist interface to set this flag for this case.
>
> Hong
>
>
> On Tue, Oct 25, 2016 at 8:20 AM, Anton Popov  wrote:
>
>> Hong,
>>
>> I get all the problems gone and valgrind-clean output if I specify this:
>>
>> -mat_superlu_dist_fact SamePattern_SameRowPerm
>> What does SamePattern_SameRowPerm actually mean?
>> Row permutations are for large diagonal, column permutations are for
>> sparsity, right?
>> Will it skip subsequent matrix permutations for large diagonal even if
>> matrix values change significantly?
>>
>> Surprisingly everything works even with:
>>
>> -mat_superlu_dist_colperm PARMETIS
>> -mat_superlu_dist_parsymbfact TRUE
>>
>> Thanks,
>> Anton
>>
>> On 10/24/2016 09:06 PM, Hong wrote:
>>
>> Anton:
>>>
>>> If replacing superlu_dist with mumps, does your code work?
>>>
>>> yes
>>>
>>
>> You may use mumps in your code, or tests different options for
>> superlu_dist:
>>
>>   -mat_superlu_dist_equil:  Equilibrate matrix (None)
>>   -mat_superlu_dist_rowperm  Row permutation (choose one of)
>> LargeDiag NATURAL (None)
>>   -mat_superlu_dist_colperm  Column permutation (choose
>> one of) NATURAL MMD_AT_PLUS_A MMD_ATA METIS_AT_PLUS_A PARMETIS (None)
>>   -mat_superlu_dist_replacetinypivot:  Replace tiny pivots (None)
>>   -mat_superlu_dist_parsymbfact:  Parallel symbolic factorization
>> (None)
>>   -mat_superlu_dist_fact  Sparsity pattern for repeated
>> matrix factorization (choose one of) SamePattern SamePattern_SameRowPerm
>> (None)
>>
>> The options inside <> are defaults. You may try others. This might help
>> narrow down the bug.
>>
>> Hong
>>
>>>
>>> Hong

 On 10/24/2016 05:47 PM, Hong wrote:

 Barry,
 Your change indeed fixed the error of his testing code.
 As Satish tested, on your branch, ex16 runs smooth.

 I do not understand why on maint or master branch, ex16 creases inside
 superlu_dist, but not with mumps.


 I also confirm that ex16 runs fine with latest fix, but unfortunately
 not my code.

 This is something to be expected, since my code preallocates once in
 the beginning. So there is no way it can be affected by multiple
 preallocations. Subsequently I only do matrix assembly, that makes sure
 structure doesn't change (set to get error otherwise).

 Summary: we don't have a simple 

Re: [petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Matthew Knepley
On Tue, Oct 25, 2016 at 10:54 AM, Abdullah Ali Sivas <
abdullahasi...@gmail.com> wrote:

> Hello,
>
> I want to use PETSc with mfem and I know that mfem people will figure out
> a way to do it in few months. But for now as a temporary solution I just
> thought of converting hypre PARCSR matrices (that is what mfem uses as
> linear solver package) into PETSc MPIAIJ matrices and I have a semi-working
> code with some bugs. Also my code is dauntingly slow and seems like not
> scaling. I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc and
> hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as starting
> points. Before starting I checked whether there was anything done similar
> to this, I could not find anything.
>
> My question is, are you aware of such a conversion code (i.e. something
> like hypre_ParCSRtoPETScMPIAIJ( hypre_ParCSRMatrix *matrix, Mat *A)?
>
No, but maybe Satish knows. Slow running times most likely come from lack
of preallocation for the target matrix.
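
For readers hitting the same slowdown, a minimal sketch of what preallocating the target matrix looks like; the per-row counts d_nnz/o_nnz are placeholders here, and in this conversion they would come from the row pointers of the ParCSR diagonal and off-diagonal blocks:

#include <petscmat.h>

PetscErrorCode CreatePreallocatedAIJ(MPI_Comm comm, PetscInt mlocal, PetscInt nlocal,
                                     const PetscInt d_nnz[], const PetscInt o_nnz[], Mat *A)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, mlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATAIJ);CHKERRQ(ierr);
  /* exact per-row counts: d_nnz for the diagonal block, o_nnz for the off-diagonal block */
  ierr = MatMPIAIJSetPreallocation(*A, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(*A, 0, d_nnz);CHKERRQ(ierr); /* covers the one-process case */
  PetscFunctionReturn(0);
}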

  Thanks,

 Matt

> Thanks in advance,
> Abdullah Ali Sivas
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-users] Using PETSc solvers and preconditioners with mfem

2016-10-25 Thread Abdullah Ali Sivas

Hello,

I want to use PETSc with mfem and I know that the mfem people will figure 
out a way to do it in a few months. But for now, as a temporary solution, I 
just thought of converting hypre ParCSR matrices (that is what mfem uses 
as its linear solver package) into PETSc MPIAIJ matrices, and I have a 
semi-working code with some bugs. My code is also dauntingly slow and does 
not seem to scale. I have used MatHYPRE_IJMatrixCopy from myhp.c of PETSc 
and hypre_ParCSRMatrixPrintIJ from par_csr_matrix.c of hypre as starting 
points. Before starting I checked whether anything similar had been done; 
I could not find anything.


My question is, are you aware of such a conversion code, i.e. something 
like hypre_ParCSRtoPETScMPIAIJ(hypre_ParCSRMatrix *matrix, Mat *A)?



Thanks in advance,
Abdullah Ali Sivas



Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-25 Thread Hong
Anton,
I guess, when you reuse matrix and its symbolic factor with updated
numerical values, superlu_dist requires this option. I'm cc'ing Sherry to
confirm it.

I'll check petsc/superlu-dist interface to set this flag for this case.

Hong

On Tue, Oct 25, 2016 at 8:20 AM, Anton Popov  wrote:

> Hong,
>
> I get all the problems gone and valgrind-clean output if I specify this:
>
> -mat_superlu_dist_fact SamePattern_SameRowPerm
> What does SamePattern_SameRowPerm actually mean?
> Row permutations are for large diagonal, column permutations are for
> sparsity, right?
> Will it skip subsequent matrix permutations for large diagonal even if
> matrix values change significantly?
>
> Surprisingly everything works even with:
>
> -mat_superlu_dist_colperm PARMETIS
> -mat_superlu_dist_parsymbfact TRUE
>
> Thanks,
> Anton
>
> On 10/24/2016 09:06 PM, Hong wrote:
>
> Anton:
>>
>> If replacing superlu_dist with mumps, does your code work?
>>
>> yes
>>
>
> You may use mumps in your code, or tests different options for
> superlu_dist:
>
>   -mat_superlu_dist_equil:  Equilibrate matrix (None)
>   -mat_superlu_dist_rowperm  Row permutation (choose one of)
> LargeDiag NATURAL (None)
>   -mat_superlu_dist_colperm  Column permutation (choose
> one of) NATURAL MMD_AT_PLUS_A MMD_ATA METIS_AT_PLUS_A PARMETIS (None)
>   -mat_superlu_dist_replacetinypivot:  Replace tiny pivots (None)
>   -mat_superlu_dist_parsymbfact:  Parallel symbolic factorization
> (None)
>   -mat_superlu_dist_fact  Sparsity pattern for repeated
> matrix factorization (choose one of) SamePattern SamePattern_SameRowPerm
> (None)
>
> The options inside <> are defaults. You may try others. This might help
> narrow down the bug.
>
> Hong
>
>>
>> Hong
>>>
>>> On 10/24/2016 05:47 PM, Hong wrote:
>>>
>>> Barry,
>>> Your change indeed fixed the error of his testing code.
>>> As Satish tested, on your branch, ex16 runs smooth.
>>>
>>> I do not understand why on maint or master branch, ex16 creases inside
>>> superlu_dist, but not with mumps.
>>>
>>>
>>> I also confirm that ex16 runs fine with latest fix, but unfortunately
>>> not my code.
>>>
>>> This is something to be expected, since my code preallocates once in the
>>> beginning. So there is no way it can be affected by multiple
>>> preallocations. Subsequently I only do matrix assembly, that makes sure
>>> structure doesn't change (set to get error otherwise).
>>>
>>> Summary: we don't have a simple test code to debug superlu issue anymore.
>>>
>>> Anton
>>>
>>> Hong
>>>
>>> On Mon, Oct 24, 2016 at 9:34 AM, Satish Balay  wrote:
>>>
 On Mon, 24 Oct 2016, Barry Smith wrote:

 >
 > > [Or perhaps Hong is using a different test code and is observing
 bugs
 > > with superlu_dist interface..]
 >
 >She states that her test does a NEW MatCreate() for each matrix
 load (I cut and pasted it in the email I just sent). The bug I fixed was
 only related to using the SAME matrix from one MatLoad() in another
 MatLoad().

 Ah - ok.. Sorry - wasn't thinking clearly :(

 Satish

>>>
>>>
>>>
>>
>>
>
>


Re: [petsc-users] error with wrong tarball in path/to/package

2016-10-25 Thread Satish Balay
>>>
Looking for ML at git.ml, hg.ml or a directory starting 
with petsc-pkg-ml
Could not locate an existing copy of ML:
  ['ml-6.2']
<<<

So configure was looking for something with 'petsc-pkg-ml' - and it
did not find it. [there was 'ml-6.2' - but configure doesn't know what
it is..]

Yeah the message could be more fine-grained - will check.

Satish

On Tue, 25 Oct 2016, Klaij, Christiaan wrote:

> Satish,
> 
> Fair enough, thanks for explaining. 
> 
> As far as I can tell the configure log (attached) gives the same error.
> 
> If the current version/format is not found, why not just say so
> in the error message? Saying "unable to download" suggests
> something's wrong with the internet connection, or file path.
> 
> Chris
> 
> From: Satish Balay 
> Sent: Tuesday, October 25, 2016 4:01 PM
> To: Klaij, Christiaan
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] error with wrong tarball in path/to/package
> 
> Always look in configure.log to see the exact error.
> 
> No - configure does not do checksums - but it expects the package to
> be in a certain format [and this can change between petsc versions].
> So if you are using a url from petsc-3.5 - with 3.7 -- it might not
> work..
> 
> So for any version of petsc - it's always best to use the default
> external package URLs [for some external packages - different versions
> might work - but usually that's not tested].
> 
> configure attempts to determine the exact error - and attempts to
> print an appropriate message - but that's not always possible to figure out
> - so it's best to check configure.log to see the exact issue..
> 
> Note: to print the 'wrong version message' - it needs to know & keep
> track of the previous versions [and formats - if any] - and that's not
> easy.. All it can do is check for - current version/format is found or
> not..
> 
> Satish
> 
> On Tue, 25 Oct 2016, Klaij, Christiaan wrote:
> 
> >
> > Here is a small complaint about the error message "unable to
> > download" that is given when using
> > --download-PACKAGENAME=/PATH/TO/package.tar.gz with the wrong
> > tarball. For example, for my previous install, I was using
> > petsc-3.5.3 with:
> >
> > --download-ml=/path/to/ml-6.2-win.tar.gz
> >
> > Using the same file with 3.7.4 gives this error message
> >
> > ***
> >  UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
> > details):
> > ---
> > Unable to download ML
> > Failed to download ML
> > ***
> >
> > My guess from ml.py is that I should now download:
> >
> > https://bitbucket.org/petsc/pkg-ml/get/v6.2-p4.tar.gz
> >
> > and that you are somehow checking that the file specified in the
> > path matches this file (name, hash, ...)?
> >
> > If so, "unable to download" is a bit confusing, I wasted some
> > time looking at the file system and the file:// protocol, and
> > annoyed the sysadmins... May I suggest to replace the message
> > with "wrong version" or something?
> >
> > Chris
> >
> >
> > dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
> > MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
> >
> > MARIN news: 
> > http://www.marin.nl/web/News/News-items/Workshop-Optimaliseren-is-ook-innoveren-15-november.htm
> >
> >
> 
> 



Re: [petsc-users] error with wrong tarball in path/to/package

2016-10-25 Thread Satish Balay
Always look in configure.log to see the exact error.

No - configure does not do checksums - but it expects the package to
be in a certain format [and this can change between petsc versions].
So if you are using a url from petsc-3.5 - with 3.7 -- it might not
work..

So for any version of petsc - it's always best to use the default
external package URLs [for some external packages - different versions
might work - but usually that's not tested].

configure attempts to determine the exact error - and attempts to
print an appropriate message - but that's not always possible to figure out
- so it's best to check configure.log to see the exact issue..

Note: to print the 'wrong version message' - it needs to know & keep
track of the previous versions [and formats - if any] - and that's not
easy.. All it can do is check for - current version/format is found or
not..

Satish

On Tue, 25 Oct 2016, Klaij, Christiaan wrote:

> 
> Here is a small complaint about the error message "unable to
> download" that is given when using
> --download-PACKAGENAME=/PATH/TO/package.tar.gz with the wrong
> tarball. For example, for my previous install, I was using
> petsc-3.5.3 with:
> 
> --download-ml=/path/to/ml-6.2-win.tar.gz
> 
> Using the same file with 3.7.4 gives this error message
> 
> ***
>  UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
> details):
> ---
> Unable to download ML
> Failed to download ML
> ***
> 
> My guess from ml.py is that I should now download:
> 
> https://bitbucket.org/petsc/pkg-ml/get/v6.2-p4.tar.gz
> 
> and that you are somehow checking that the file specified in the
> path matches this file (name, hash, ...)?
> 
> If so, "unable to download" is a bit confusing, I wasted some
> time looking at the file system and the file:// protocol, and
> annoyed the sysadmins... May I suggest to replace the message
> with "wrong version" or something?
> 
> Chris
> 
> 
> dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
> MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl
> 
> MARIN news: 
> http://www.marin.nl/web/News/News-items/Workshop-Optimaliseren-is-ook-innoveren-15-november.htm
> 
> 



Re: [petsc-users] SuperLU_dist issue in 3.7.4

2016-10-25 Thread Anton Popov

Hong,

I get all the problems gone and valgrind-clean output if I specify this:

-mat_superlu_dist_fact SamePattern_SameRowPerm

What does SamePattern_SameRowPerm actually mean?
Row permutations are for large diagonal, column permutations are for 
sparsity, right?
Will it skip subsequent matrix permutations for large diagonal even if 
matrix values change significantly?


Surprisingly everything works even with:

-mat_superlu_dist_colperm PARMETIS
-mat_superlu_dist_parsymbfact TRUE

Thanks,
Anton

On 10/24/2016 09:06 PM, Hong wrote:

Anton:


If replacing superlu_dist with mumps, does your code work?

yes

You may use mumps in your code, or tests different options for 
superlu_dist:


  -mat_superlu_dist_equil:  Equilibrate matrix (None)
  -mat_superlu_dist_rowperm  Row permutation (choose one 
of) LargeDiag NATURAL (None)
  -mat_superlu_dist_colperm  Column permutation 
(choose one of) NATURAL MMD_AT_PLUS_A MMD_ATA METIS_AT_PLUS_A PARMETIS 
(None)

  -mat_superlu_dist_replacetinypivot:  Replace tiny pivots (None)
  -mat_superlu_dist_parsymbfact:  Parallel symbolic 
factorization (None)
  -mat_superlu_dist_fact  Sparsity pattern for repeated 
matrix factorization (choose one of) SamePattern 
SamePattern_SameRowPerm (None)


The options inside <> are defaults. You may try others. This might 
help narrow down the bug.


Hong



Hong

On 10/24/2016 05:47 PM, Hong wrote:

Barry,
Your change indeed fixed the error of his testing code.
As Satish tested, on your branch, ex16 runs smooth.

I do not understand why on maint or master branch, ex16
creases inside superlu_dist, but not with mumps.



I also confirm that ex16 runs fine with latest fix, but
unfortunately not my code.

This is something to be expected, since my code preallocates
once in the beginning. So there is no way it can be affected
by multiple preallocations. Subsequently I only do matrix
assembly, that makes sure structure doesn't change (set to
get error otherwise).

Summary: we don't have a simple test code to debug superlu
issue anymore.

Anton


Hong

On Mon, Oct 24, 2016 at 9:34 AM, Satish Balay
> wrote:

On Mon, 24 Oct 2016, Barry Smith wrote:

>
> > [Or perhaps Hong is using a different test code and is
observing bugs
> > with superlu_dist interface..]
>
>She states that her test does a NEW MatCreate() for
each matrix load (I cut and pasted it in the email I
just sent). The bug I fixed was only related to using
the SAME matrix from one MatLoad() in another MatLoad().

Ah - ok.. Sorry - wasn't thinking clearly :(

Satish












Re: [petsc-users] Element to local dof map using dmplex

2016-10-25 Thread Matthew Knepley
On Tue, Oct 25, 2016 at 6:39 AM, Morten Nobel-Jørgensen  wrote:

> Dear Matt
>
> Did you (or anyone else) find time to look at our issue?
>
> We are really looking forward to your answer :)
>

Yes, I had a little difficulty understanding what was going on, but now I
think I see. I am
attaching my modified ex19.cc. Please look at the sections marked with
'MGK'. The largest
change is that I think you can dispense with your matrix data structure,
and just call
DMPlexVecGetValuesClosure (for coordinates) and DMPlexMatSetValuesClosure
(for element matrices).
I did not understand what you needed to modify for ExodusII.
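
For readers without the attachment, a minimal sketch of the closure-based assembly referred to above (in the PETSc releases of that time the routine is spelled DMPlexMatSetClosure); the element matrices Ke are assumed to be computed by the application, and passing NULL sections means the DM's default local/global sections are used:

#include <petscdmplex.h>

PetscErrorCode AssembleWithClosure(DM dm, Mat A, PetscInt nDofPerCell, const PetscScalar *KeAll)
{
  PetscInt       cStart, cEnd, c;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr); /* cells are the height-0 points */
  for (c = cStart; c < cEnd; ++c) {
    const PetscScalar *Ke = &KeAll[(c - cStart) * nDofPerCell * nDofPerCell];
    ierr = DMPlexMatSetClosure(dm, NULL, NULL, A, c, Ke, ADD_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}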

  Thanks,

Matt


> Kind regards,
> Morten
> --
> *From:* Matthew Knepley [knep...@gmail.com]
> *Sent:* Wednesday, October 12, 2016 3:41 PM
> *To:* Morten Nobel-Jørgensen
> *Cc:* petsc-users@mcs.anl.gov
> *Subject:* Re: Element to local dof map using dmplex
>
> On Wed, Oct 12, 2016 at 6:40 AM, Morten Nobel-Jørgensen 
> wrote:
>
>> Dear PETSc developers / Matt
>>
>> Thanks for your suggestions regarding our use of dmplex in a FEM context.
>> However, Matt's advise on  using the PetscFE is not sufficient for our
>> needs (our end goal is a topology optimization framework - not just FEM)
>> and we must honestly admit that we do not see how we can use the MATIS and
>> the MatSetValuesClosure or DMPlexMatSetClosure to solve our current issues
>> as Stefano has suggested.
>>
>> We have therefore created a more representative, yet heavily
>> oversimplified, code example that demonstrates our problem. That is, the
>> dof handling is only correct on a single process and goes wrong on np>1.
>>
>> We hope very much that you can help us to overcome our problem.
>>
>
> Okay, I will look at it and try to rework it to fix your problem.
>
> I am in London this week, so it might take me until next week.
>
>   Thanks,
>
>  Matt
>
>
>> Thank you for an excellent toolkit
>> Morten and Niels
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


ex19.cc
Description: Binary data


Re: [petsc-users] SuperLU_dist issue in 3.7.4 failure of repeated calls to MatLoad() or MatMPIAIJSetPreallocation() with the same matrix

2016-10-25 Thread Anton Popov



On 10/25/2016 01:58 PM, Anton Popov wrote:



On 10/24/2016 10:32 PM, Barry Smith wrote:

Valgrind doesn't report any problems?



Valgrind hangs and never returns (waited hours for a 5 sec run) after 
entering factorization for the second time.


Before it happens it prints this (attached)

Anton






On Oct 24, 2016, at 12:09 PM, Anton Popov  wrote:



On 10/24/2016 05:47 PM, Hong wrote:

Barry,
Your change indeed fixed the error of his testing code.
As Satish tested, on your branch, ex16 runs smooth.

I do not understand why on maint or master branch, ex16 creases 
inside superlu_dist, but not with mumps.


I also confirm that ex16 runs fine with latest fix, but 
unfortunately not my code.


This is something to be expected, since my code preallocates once in 
the beginning. So there is no way it can be affected by multiple 
preallocations. Subsequently I only do matrix assembly, that makes 
sure structure doesn't change (set to get error otherwise).


Summary: we don't have a simple test code to debug superlu issue 
anymore.


Anton


Hong

On Mon, Oct 24, 2016 at 9:34 AM, Satish Balay  
wrote:

On Mon, 24 Oct 2016, Barry Smith wrote:

[Or perhaps Hong is using a different test code and is observing 
bugs

with superlu_dist interface..]
She states that her test does a NEW MatCreate() for each 
matrix load (I cut and pasted it in the email I just sent). The 
bug I fixed was only related to using the SAME matrix from one 
MatLoad() in another MatLoad().

Ah - ok.. Sorry - wasn't thinking clearly :(

Satish





USING PICARD JACOBIAN for iteration 0, ||F||/||F0||=1.00e+00
==10744== Use of uninitialised value of size 8
==10744==at 0x18087A8: static_schedule (static_schedule.c:960)
==10744==by 0x17D42AB: pdgstrf (pdgstrf.c:572)
==10744==by 0x17B94B1: pdgssvx (pdgssvx.c:1124)
==10744==by 0xA9E777: MatLUFactorNumeric_SuperLU_DIST (superlu_dist.c:427)
==10744==by 0x6CAA90: MatLUFactorNumeric (matrix.c:3099)
==10744==by 0x137DFE9: PCSetUp_LU (lu.c:139)
==10744==by 0xECC779: PCSetUp (precon.c:968)
==10744==by 0x47AD01: PCStokesUserSetup (lsolve.c:602)
==10744==by 0x476EC3: PCStokesSetup (lsolve.c:173)
==10744==by 0x473BE4: FormJacobian (nlsolve.c:389)
==10744==by 0xF40C3D: SNESComputeJacobian (snes.c:2367)
==10745== Use of uninitialised value of size 8
==10745==at 0x18087A8: static_schedule (static_schedule.c:960)
==10745==by 0x17D42AB: pdgstrf (pdgstrf.c:572)
==10745==by 0x17B94B1: pdgssvx (pdgssvx.c:1124)
==10745==by 0xA9E777: MatLUFactorNumeric_SuperLU_DIST (superlu_dist.c:427)
==10744==by 0xFA5F1F: SNESSolve_KSPONLY (ksponly.c:38)
==10744==
==10745==by 0x6CAA90: MatLUFactorNumeric (matrix.c:3099)
==10745==by 0x137DFE9: PCSetUp_LU (lu.c:139)
==10745==by 0xECC779: PCSetUp (precon.c:968)
==10745==by 0x47AD01: PCStokesUserSetup (lsolve.c:602)
==10745==by 0x476EC3: PCStokesSetup (lsolve.c:173)
==10745==by 0x473BE4: FormJacobian (nlsolve.c:389)
==10745==by 0xF40C3D: SNESComputeJacobian (snes.c:2367)
==10745==by 0xFA5F1F: SNESSolve_KSPONLY (ksponly.c:38)
==10745==
==10745== Invalid write of size 4
==10745==at 0x18087A8: static_schedule (static_schedule.c:960)
==10745==by 0x17D42AB: pdgstrf (pdgstrf.c:572)
==10745==by 0x17B94B1: pdgssvx (pdgssvx.c:1124)
==10745==by 0xA9E777: MatLUFactorNumeric_SuperLU_DIST (superlu_dist.c:427)
==10745==by 0x6CAA90: MatLUFactorNumeric (matrix.c:3099)
==10745==by 0x137DFE9: PCSetUp_LU (lu.c:139)
==10745==by 0xECC779: PCSetUp (precon.c:968)
==10745==by 0x47AD01: PCStokesUserSetup (lsolve.c:602)
==10745==by 0x476EC3: PCStokesSetup (lsolve.c:173)
==10745==by 0x473BE4: FormJacobian (nlsolve.c:389)
==10745==by 0xF40C3D: SNESComputeJacobian (snes.c:2367)
==10745==by 0xFA5F1F: SNESSolve_KSPONLY (ksponly.c:38)
==10745==  Address 0xa077c48 is 200 bytes inside a block of size 13,936 free'd
==10745==at 0x4C2EDEB: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==10745==by 0x17A8A16: superlu_free_dist (memory.c:124)
==10745==by 0x18086A2: static_schedule (static_schedule.c:946)
==10745==by 0x17D42AB: pdgstrf (pdgstrf.c:572)
==10745==by 0x17B94B1: pdgssvx (pdgssvx.c:1124)
==10745==by 0xA9E777: MatLUFactorNumeric_SuperLU_DIST (superlu_dist.c:427)
==10745==by 0x6CAA90: MatLUFactorNumeric (matrix.c:3099)
==10745==by 0x137DFE9: PCSetUp_LU (lu.c:139)
==10745==by 0xECC779: PCSetUp (precon.c:968)
==10745==by 0x47AD01: PCStokesUserSetup (lsolve.c:602)
==10745==by 0x476EC3: PCStokesSetup (lsolve.c:173)
==10745==by 0x473BE4: FormJacobian (nlsolve.c:389)
==10745==  Block was alloc'd at
==10745==at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==10745==by 0x17A89F4: superlu_malloc_dist (memory.c:118)
==10745==by 0x18051F5: static_schedule (static_schedule.c:274)
==10745==by 0x17D42AB: pdgstrf (pdgstrf.c:572)

Re: [petsc-users] SuperLU_dist issue in 3.7.4 failure of repeated calls to MatLoad() or MatMPIAIJSetPreallocation() with the same matrix

2016-10-25 Thread Anton Popov



On 10/24/2016 10:32 PM, Barry Smith wrote:

Valgrind doesn't report any problems?



Valgrind hangs and never returns (waited hours for a 5 sec run) after 
entering factorization for the second time.



On Oct 24, 2016, at 12:09 PM, Anton Popov  wrote:



On 10/24/2016 05:47 PM, Hong wrote:

Barry,
Your change indeed fixed the error of his testing code.
As Satish tested, on your branch, ex16 runs smooth.

I do not understand why on maint or master branch, ex16 creases inside 
superlu_dist, but not with mumps.


I also confirm that ex16 runs fine with latest fix, but unfortunately not my 
code.

This is something to be expected, since my code preallocates once in the 
beginning. So there is no way it can be affected by multiple preallocations. 
Subsequently I only do matrix assembly, that makes sure structure doesn't 
change (set to get error otherwise).

Summary: we don't have a simple test code to debug superlu issue anymore.

Anton


Hong

On Mon, Oct 24, 2016 at 9:34 AM, Satish Balay  wrote:
On Mon, 24 Oct 2016, Barry Smith wrote:


[Or perhaps Hong is using a different test code and is observing bugs
with superlu_dist interface..]

She states that her test does a NEW MatCreate() for each matrix load (I cut 
and pasted it in the email I just sent). The bug I fixed was only related to 
using the SAME matrix from one MatLoad() in another MatLoad().

Ah - ok.. Sorry - wasn't thinking clearly :(

Satish





Re: [petsc-users] Element to local dof map using dmplex

2016-10-25 Thread Morten Nobel-Jørgensen
Dear Matt

Did you (or anyone else) find time to look at our issue?

We are really looking forward to your answer :)

Kind regards,
Morten

From: Matthew Knepley [knep...@gmail.com]
Sent: Wednesday, October 12, 2016 3:41 PM
To: Morten Nobel-Jørgensen
Cc: petsc-users@mcs.anl.gov
Subject: Re: Element to local dof map using dmplex

On Wed, Oct 12, 2016 at 6:40 AM, Morten Nobel-Jørgensen 
> wrote:
Dear PETSc developers / Matt

Thanks for your suggestions regarding our use of dmplex in a FEM context. 
However, Matt's advice on using the PetscFE is not sufficient for our needs 
(our end goal is a topology optimization framework - not just FEM) and we must 
honestly admit that we do not see how we can use the MATIS and the 
MatSetValuesClosure or DMPlexMatSetClosure to solve our current issues as 
Stefano has suggested.

We have therefore created a more representative, yet heavily oversimplified, 
code example that demonstrates our problem. That is, the dof handling is only 
correct on a single process and goes wrong on np>1.

We hope very much that you can help us to overcome our problem.

Okay, I will look at it and try to rework it to fix your problem.

I am in London this week, so it might take me until next week.

  Thanks,

 Matt

Thank you for an excellent toolkit
Morten and Niels



--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener


[petsc-users] error with wrong tarball in path/to/package

2016-10-25 Thread Klaij, Christiaan

Here is a small complaint about the error message "unable to
download" that is given when using
--download-PACKAGENAME=/PATH/TO/package.tar.gz with the wrong
tarball. For example, for my previous install, I was using
petsc-3.5.3 with:

--download-ml=/path/to/ml-6.2-win.tar.gz

Using the same file with 3.7.4 gives this error message

***
 UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
details):
---
Unable to download ML
Failed to download ML
***

My guess from ml.py is that I should now download:

https://bitbucket.org/petsc/pkg-ml/get/v6.2-p4.tar.gz

and that you are somehow checking that the file specified in the
path matches this file (name, hash, ...)?

If so, "unable to download" is a bit confusing, I wasted some
time looking at the file system and the file:// protocol, and
annoyed the sysadmins... May I suggest to replace the message
with "wrong version" or something?

Chris


dr. ir. Christiaan Klaij  | CFD Researcher | Research & Development
MARIN | T +31 317 49 33 44 | mailto:c.kl...@marin.nl | http://www.marin.nl

MARIN news: 
http://www.marin.nl/web/News/News-items/Workshop-Optimaliseren-is-ook-innoveren-15-november.htm



Re: [petsc-users] BVNormColumn

2016-10-25 Thread Jose E. Roman

> El 19 oct 2016, a las 9:54, Jose E. Roman  escribió:
> 
>> 
>> El 19 oct 2016, a las 0:26, Bikash Kanungo  escribió:
>> 
>> Hi Jose,
>> 
>> Thanks for the pointers. Here's what I observed on probing it further:
>> 
>>  • The ||B - B^H|| norm was 1e-18. So I explicitly made it Hermitian by 
>> setting B = 0.5(B+B^H). However, this didn't help.
>>  • Next, I checked the conditioning of B by computing the ratio of 
>> the highest and lowest eigenvalues. The conditioning is of the order 1e-9. 
>>  • I monitored the imaginary part of VecDot(y,x, dotXY), 
>> where y = B*x, and noted that the error "The inner product is not well 
>> defined" is flagged only when the imaginary part is more than 1e-16 
>> in magnitude. For the first few iterations of orthogonalization (i.e., the ones 
>> where orthogonalization is successful), the values of VecDot(y,x, dotXY) are 
>> all found to be lower than 1e-16. I guess this small imaginary part might be 
>> the cause of the error. 
>> Let me know if there is a way to bypass the abort by changing the tolerance 
>> for imaginary part.
>> 
>> 
>> 
>> Regards,
>> Bikash
>> 
> 
> There is something wrong: the condition number is greater than 1 by 
> definition, so it cannot be 1e-9. Anyway, maybe what happens is that your 
> matrix has a very small norm. The SLEPc code needs a fix for the case when 
> the norm of B or the norm of the vector x is very small. Please send the 
> matrix to my personal email and I will make some tests.
> 
> Jose

I tested with your matrix and vector on two different machines, with 
different compilers, and in both cases the computation did not fail. The 
imaginary part is below the machine precision, as expected. I don't know why 
you are getting a larger roundoff error. Anyway, the check that we currently have 
in SLEPc is too strict. You can try relaxing it, by editing function 
BV_SafeSqrt (in $SLEPC_DIR/include/slepc/private/bvimpl.h), for instance with 
this:

  if (PetscAbsReal(PetscImaginaryPart(alpha))>PETSC_MACHINE_EPSILON &&
      PetscAbsReal(PetscImaginaryPart(alpha))/absal>100*PETSC_MACHINE_EPSILON)
    SETERRQ1(PetscObjectComm((PetscObject)bv),1,"The inner product is not well defined: nonzero imaginary part %g",PetscImaginaryPart(alpha));

Let us know if this works for you.
Thanks.
Jose