Re: [petsc-users] Issue updating MUMPS ictnl after failed solve

2016-09-19 Thread Hong
David:
I did the following:

PC  pc;
Mat F;
ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
ierr = PCReset(pc);CHKERRQ(ierr);
ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
ierr = PCSetType(pc,PCCHOLESKY);CHKERRQ(ierr);

ierr = PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS);CHKERRQ(ierr);
ierr = PCFactorSetUpMatSolverPackage(pc);CHKERRQ(ierr);
ierr = PCFactorGetMatrix(pc,&F);CHKERRQ(ierr);
ierr = MatMumpsSetIcntl(F,14,30);CHKERRQ(ierr);

ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

Then it re-solves the matrix equation with ICNTL(14)=30.
Attached is a modified petsc/src/ksp/ksp/examples/tutorials/ex10.c.
Using it with your matrix.dat, I get

mpiexec -n 4 ./ex10 -f0 matrix.dat -rhs 0 -ksp_reason
Number of iterations =   0
KSPConvergedReason: -11
 Reset PC with ICNTL(14)=30 ...
KSPConvergedReason: 2

Hong
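For reference, here is a minimal sketch of the automatic retry loop David asked about, assembled from the recipe above (untested; it assumes A, b, x, ksp and pc are set up as in the test code quoted below, that a negative KSPConvergedReason after the solve means the MUMPS factorization ran out of workspace, and the 90% cap is arbitrary):

  PetscInt           icntl14 = 20;    /* MUMPS default workspace increase, in percent */
  KSPConvergedReason reason;
  Mat                F;

  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr);
  while (reason < 0 && icntl14 <= 90) {          /* -11 here means PCSetUp/factorization failed */
    icntl14 += 10;
    ierr = PCReset(pc);CHKERRQ(ierr);            /* discard the failed factorization */
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCCHOLESKY);CHKERRQ(ierr);
    ierr = PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS);CHKERRQ(ierr);
    ierr = PCFactorSetUpMatSolverPackage(pc);CHKERRQ(ierr);
    ierr = PCFactorGetMatrix(pc,&F);CHKERRQ(ierr);
    ierr = MatMumpsSetIcntl(F,14,icntl14);CHKERRQ(ierr);  /* retry with a larger workspace */
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
    ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr);
  }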

On Mon, Sep 19, 2016 at 9:45 PM, Fande Kong  wrote:
>
>> Placing PCReset(PC pc) before the second kspsolve might works.
>>
>> Fande Kong,
>>
>> On Mon, Sep 19, 2016 at 7:38 PM, murat keçeli  wrote:
>>
>>> Another guess: maybe you also need KSPSetUp(ksp); before the second
>>> KSPSolve(ksp,b,x);.
>>>
>>> Murat Keceli
>>>
>>
> Thanks for the suggestions. I just tried these, and they didn't work
> either unfortunately.
>
> David
>
>
>
>
>
>> ​
>>>
>>> On Mon, Sep 19, 2016 at 8:33 PM, David Knezevic <
>>> david.kneze...@akselos.com> wrote:
>>>
 On Mon, Sep 19, 2016 at 7:26 PM, Dave May 
 wrote:

>
>
> On 19 September 2016 at 21:05, David Knezevic <
> david.kneze...@akselos.com> wrote:
>
>> When I use MUMPS via PETSc, one issue is that it can sometimes fail
>> with MUMPS error -9, which means that MUMPS didn't allocate a big enough
>> workspace. This can typically be fixed by increasing MUMPS icntl 14, e.g.
>> via the command line option -mat_mumps_icntl_14.
>>
>> However, instead of having to run several times with different
>> command line options, I'd like to be able to automatically increment 
>> icntl
>> 14 value in a loop until the solve succeeds.
>>
>> I have a saved matrix which fails when I use it for a solve with
>> MUMPS with 4 MPI processes and the default ictnl values, so I'm using 
>> this
>> to check that I can achieve the automatic icntl 14 update, as described
>> above. (The matrix is 14MB so I haven't attached it here, but I'd be 
>> happy
>> to send it to anyone else who wants to try this test case out.)
>>
>> I've pasted some test code below which provides a simple test of this
>> idea using two solves. The first solve uses the default value of icntl 
>> 14,
>> which fails, and then we update icntl 14 to 30 and solve again. The 
>> second
>> solve should succeed since icntl 14 of 30 is sufficient for MUMPS to
>> succeed in this case, but for some reason the second solve still fails.
>>
>> Below I've also pasted the output from -ksp_view, and you can see
>> that ictnl 14 is being updated correctly (see the ICNTL(14) lines in the
>> output), so it's not clear to me why the second solve fails. It seems 
>> like
>> MUMPS is ignoring the update to the ictnl value?
>>
>
> I believe this parameter is utilized during the numerical
> factorization phase.
> In your code, the operator hasn't changed, however you haven't
> signalled to the KSP that you want to re-perform the numerical
> factorization.
> You can do this by calling KSPSetOperators() before your second solve.
> I think if you do this (please try it), the factorization will be
> performed again and the new value of icntl will have an effect.
>
> Note this is a wild stab in the dark - I haven't dug through the
> petsc-mumps code in detail...
>

 That sounds like a plausible guess to me, but unfortunately it didn't
 work. I added KSPSetOperators(ksp,A,A); before the second solve and I
 got the same behavior as before.

 Thanks,
 David





> 
>> -
>> Test code:
>>
>>   Mat A;
>>   MatCreate(PETSC_COMM_WORLD,&A);
>>   MatSetType(A,MATMPIAIJ);
>>
>>   PetscViewer petsc_viewer;
>>   PetscViewerBinaryOpen( PETSC_COMM_WORLD,
>>  "matrix.dat",
>>  FILE_MODE_READ,
>>  &petsc_viewer);
>>   MatLoad(A, petsc_viewer);
>>   PetscViewerDestroy(&petsc_viewer);
>>
>>   PetscInt m, n;
>>   MatGetSize(A, &m, &n);
>>
>>   Vec x;
>>   VecCreate(PETSC_COMM_WORLD,&x);
>>   VecSetSizes(x,PETSC_DECIDE,m);
>>   VecSetFromOptions(x);
>>   VecSet(x,1.0);
>>
>>   Vec b;
>>   VecDuplicate(x,&b);
>>
>>   KSP ksp;
>>   PC 

Re: [petsc-users] Issue updating MUMPS ictnl after failed solve

2016-09-19 Thread David Knezevic
On Mon, Sep 19, 2016 at 9:45 PM, Fande Kong  wrote:

> Placing PCReset(PC pc) before the second kspsolve might works.
>
> Fande Kong,
>
> On Mon, Sep 19, 2016 at 7:38 PM, murat keçeli  wrote:
>
>> Another guess: maybe you also need KSPSetUp(ksp); before the second
>> KSPSolve(ksp,b,x);.
>>
>> Murat Keceli
>>
>
Thanks for the suggestions. I just tried these, and they didn't work either
unfortunately.

David





> ​
>>
>> On Mon, Sep 19, 2016 at 8:33 PM, David Knezevic <
>> david.kneze...@akselos.com> wrote:
>>
>>> On Mon, Sep 19, 2016 at 7:26 PM, Dave May 
>>> wrote:
>>>


 On 19 September 2016 at 21:05, David Knezevic <
 david.kneze...@akselos.com> wrote:

> When I use MUMPS via PETSc, one issue is that it can sometimes fail
> with MUMPS error -9, which means that MUMPS didn't allocate a big enough
> workspace. This can typically be fixed by increasing MUMPS icntl 14, e.g.
> via the command line option -mat_mumps_icntl_14.
>
> However, instead of having to run several times with different command
> line options, I'd like to be able to automatically increment icntl 14 
> value
> in a loop until the solve succeeds.
>
> I have a saved matrix which fails when I use it for a solve with MUMPS
> with 4 MPI processes and the default ictnl values, so I'm using this to
> check that I can achieve the automatic icntl 14 update, as described 
> above.
> (The matrix is 14MB so I haven't attached it here, but I'd be happy to 
> send
> it to anyone else who wants to try this test case out.)
>
> I've pasted some test code below which provides a simple test of this
> idea using two solves. The first solve uses the default value of icntl 14,
> which fails, and then we update icntl 14 to 30 and solve again. The second
> solve should succeed since icntl 14 of 30 is sufficient for MUMPS to
> succeed in this case, but for some reason the second solve still fails.
>
> Below I've also pasted the output from -ksp_view, and you can see that
> ictnl 14 is being updated correctly (see the ICNTL(14) lines in the
> output), so it's not clear to me why the second solve fails. It seems like
> MUMPS is ignoring the update to the ictnl value?
>

 I believe this parameter is utilized during the numerical factorization
 phase.
 In your code, the operator hasn't changed, however you haven't
 signalled to the KSP that you want to re-perform the numerical
 factorization.
 You can do this by calling KSPSetOperators() before your second solve.
 I think if you do this (please try it), the factorization will be
 performed again and the new value of icntl will have an effect.

 Note this is a wild stab in the dark - I haven't dug through the
 petsc-mumps code in detail...

>>>
>>> That sounds like a plausible guess to me, but unfortunately it didn't
>>> work. I added KSPSetOperators(ksp,A,A); before the second solve and I
>>> got the same behavior as before.
>>>
>>> Thanks,
>>> David
>>>
>>>
>>>
>>>
>>>
 
> -
> Test code:
>
>   Mat A;
>   MatCreate(PETSC_COMM_WORLD,&A);
>   MatSetType(A,MATMPIAIJ);
>
>   PetscViewer petsc_viewer;
>   PetscViewerBinaryOpen( PETSC_COMM_WORLD,
>  "matrix.dat",
>  FILE_MODE_READ,
>  &petsc_viewer);
>   MatLoad(A, petsc_viewer);
>   PetscViewerDestroy(&petsc_viewer);
>
>   PetscInt m, n;
>   MatGetSize(A, &m, &n);
>
>   Vec x;
>   VecCreate(PETSC_COMM_WORLD,&x);
>   VecSetSizes(x,PETSC_DECIDE,m);
>   VecSetFromOptions(x);
>   VecSet(x,1.0);
>
>   Vec b;
>   VecDuplicate(x,&b);
>
>   KSP ksp;
>   PC pc;
>
>   KSPCreate(PETSC_COMM_WORLD,&ksp);
>   KSPSetOperators(ksp,A,A);
>
>   KSPSetType(ksp,KSPPREONLY);
>   KSPGetPC(ksp,&pc);
>
>   PCSetType(pc,PCCHOLESKY);
>
>   PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS);
>   PCFactorSetUpMatSolverPackage(pc);
>
>   KSPSetFromOptions(ksp);
>   KSPSetUp(ksp);
>
>   KSPSolve(ksp,b,x);
>
>   {
> KSPConvergedReason reason;
> KSPGetConvergedReason(ksp, &reason);
> std::cout << "converged reason: " << reason << std::endl;
>   }
>
>   Mat F;
>   PCFactorGetMatrix(pc,&F);
>   MatMumpsSetIcntl(F,14,30);
>
>   KSPSolve(ksp,b,x);
>
>   {
> KSPConvergedReason reason;
> KSPGetConvergedReason(ksp, &reason);
> std::cout << "converged reason: " << reason << std::endl;
>   }
>
> 
> -
> -ksp_view output (ICNTL(14) changes from 20 to 30, 

Re: [petsc-users] Issue updating MUMPS ictnl after failed solve

2016-09-19 Thread Fande Kong
Placing PCReset(PC pc) before the second KSPSolve might work.

Fande Kong,

On Mon, Sep 19, 2016 at 7:38 PM, murat keçeli  wrote:

> Another guess: maybe you also need KSPSetUp(ksp); before the second
> KSPSolve(ksp,b,x);.
>
> Murat Keceli
> ​
>
> On Mon, Sep 19, 2016 at 8:33 PM, David Knezevic <
> david.kneze...@akselos.com> wrote:
>
>> On Mon, Sep 19, 2016 at 7:26 PM, Dave May 
>> wrote:
>>
>>>
>>>
>>> On 19 September 2016 at 21:05, David Knezevic <
>>> david.kneze...@akselos.com> wrote:
>>>
 When I use MUMPS via PETSc, one issue is that it can sometimes fail
 with MUMPS error -9, which means that MUMPS didn't allocate a big enough
 workspace. This can typically be fixed by increasing MUMPS icntl 14, e.g.
 via the command line option -mat_mumps_icntl_14.

 However, instead of having to run several times with different command
 line options, I'd like to be able to automatically increment icntl 14 value
 in a loop until the solve succeeds.

 I have a saved matrix which fails when I use it for a solve with MUMPS
 with 4 MPI processes and the default ictnl values, so I'm using this to
 check that I can achieve the automatic icntl 14 update, as described above.
 (The matrix is 14MB so I haven't attached it here, but I'd be happy to send
 it to anyone else who wants to try this test case out.)

 I've pasted some test code below which provides a simple test of this
 idea using two solves. The first solve uses the default value of icntl 14,
 which fails, and then we update icntl 14 to 30 and solve again. The second
 solve should succeed since icntl 14 of 30 is sufficient for MUMPS to
 succeed in this case, but for some reason the second solve still fails.

 Below I've also pasted the output from -ksp_view, and you can see that
 ictnl 14 is being updated correctly (see the ICNTL(14) lines in the
 output), so it's not clear to me why the second solve fails. It seems like
 MUMPS is ignoring the update to the ictnl value?

>>>
>>> I believe this parameter is utilized during the numerical factorization
>>> phase.
>>> In your code, the operator hasn't changed, however you haven't signalled
>>> to the KSP that you want to re-perform the numerical factorization.
>>> You can do this by calling KSPSetOperators() before your second solve.
>>> I think if you do this (please try it), the factorization will be
>>> performed again and the new value of icntl will have an effect.
>>>
>>> Note this is a wild stab in the dark - I haven't dug through the
>>> petsc-mumps code in detail...
>>>
>>
>> That sounds like a plausible guess to me, but unfortunately it didn't
>> work. I added KSPSetOperators(ksp,A,A); before the second solve and I
>> got the same behavior as before.
>>
>> Thanks,
>> David
>>
>>
>>
>>
>>
>>> 
 -
 Test code:

   Mat A;
   MatCreate(PETSC_COMM_WORLD,&A);
   MatSetType(A,MATMPIAIJ);

   PetscViewer petsc_viewer;
   PetscViewerBinaryOpen( PETSC_COMM_WORLD,
  "matrix.dat",
  FILE_MODE_READ,
  &petsc_viewer);
   MatLoad(A, petsc_viewer);
   PetscViewerDestroy(&petsc_viewer);

   PetscInt m, n;
   MatGetSize(A, &m, &n);

   Vec x;
   VecCreate(PETSC_COMM_WORLD,&x);
   VecSetSizes(x,PETSC_DECIDE,m);
   VecSetFromOptions(x);
   VecSet(x,1.0);

   Vec b;
   VecDuplicate(x,&b);

   KSP ksp;
   PC pc;

   KSPCreate(PETSC_COMM_WORLD,&ksp);
   KSPSetOperators(ksp,A,A);

   KSPSetType(ksp,KSPPREONLY);
   KSPGetPC(ksp,&pc);

   PCSetType(pc,PCCHOLESKY);

   PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS);
   PCFactorSetUpMatSolverPackage(pc);

   KSPSetFromOptions(ksp);
   KSPSetUp(ksp);

   KSPSolve(ksp,b,x);

   {
 KSPConvergedReason reason;
 KSPGetConvergedReason(ksp, &reason);
 std::cout << "converged reason: " << reason << std::endl;
   }

   Mat F;
   PCFactorGetMatrix(pc,&F);
   MatMumpsSetIcntl(F,14,30);

   KSPSolve(ksp,b,x);

   {
 KSPConvergedReason reason;
 KSPGetConvergedReason(ksp, &reason);
 std::cout << "converged reason: " << reason << std::endl;
   }

 
 -
 -ksp_view output (ICNTL(14) changes from 20 to 30, but we get
 "converged reason: -11" for both solves)

 KSP Object: 4 MPI processes
   type: preonly
   maximum iterations=1, initial guess is zero
   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
   left preconditioning
   using NONE norm type for convergence test
 PC Object: 4 MPI processes

Re: [petsc-users] Issue updating MUMPS ictnl after failed solve

2016-09-19 Thread murat keçeli
Another guess: maybe you also need KSPSetUp(ksp); before the second
KSPSolve(ksp,b,x);.

Murat Keceli
​

On Mon, Sep 19, 2016 at 8:33 PM, David Knezevic 
wrote:

> On Mon, Sep 19, 2016 at 7:26 PM, Dave May  wrote:
>
>>
>>
>> On 19 September 2016 at 21:05, David Knezevic > > wrote:
>>
>>> When I use MUMPS via PETSc, one issue is that it can sometimes fail with
>>> MUMPS error -9, which means that MUMPS didn't allocate a big enough
>>> workspace. This can typically be fixed by increasing MUMPS icntl 14, e.g.
>>> via the command line option -mat_mumps_icntl_14.
>>>
>>> However, instead of having to run several times with different command
>>> line options, I'd like to be able to automatically increment icntl 14 value
>>> in a loop until the solve succeeds.
>>>
>>> I have a saved matrix which fails when I use it for a solve with MUMPS
>>> with 4 MPI processes and the default ictnl values, so I'm using this to
>>> check that I can achieve the automatic icntl 14 update, as described above.
>>> (The matrix is 14MB so I haven't attached it here, but I'd be happy to send
>>> it to anyone else who wants to try this test case out.)
>>>
>>> I've pasted some test code below which provides a simple test of this
>>> idea using two solves. The first solve uses the default value of icntl 14,
>>> which fails, and then we update icntl 14 to 30 and solve again. The second
>>> solve should succeed since icntl 14 of 30 is sufficient for MUMPS to
>>> succeed in this case, but for some reason the second solve still fails.
>>>
>>> Below I've also pasted the output from -ksp_view, and you can see that
>>> ictnl 14 is being updated correctly (see the ICNTL(14) lines in the
>>> output), so it's not clear to me why the second solve fails. It seems like
>>> MUMPS is ignoring the update to the ictnl value?
>>>
>>
>> I believe this parameter is utilized during the numerical factorization
>> phase.
>> In your code, the operator hasn't changed, however you haven't signalled
>> to the KSP that you want to re-perform the numerical factorization.
>> You can do this by calling KSPSetOperators() before your second solve.
>> I think if you do this (please try it), the factorization will be
>> performed again and the new value of icntl will have an effect.
>>
>> Note this is a wild stab in the dark - I haven't dug through the
>> petsc-mumps code in detail...
>>
>
> That sounds like a plausible guess to me, but unfortunately it didn't
> work. I added KSPSetOperators(ksp,A,A); before the second solve and I got
> the same behavior as before.
>
> Thanks,
> David
>
>
>
>
>
>> 
>>> -
>>> Test code:
>>>
>>>   Mat A;
>>>   MatCreate(PETSC_COMM_WORLD,&A);
>>>   MatSetType(A,MATMPIAIJ);
>>>
>>>   PetscViewer petsc_viewer;
>>>   PetscViewerBinaryOpen( PETSC_COMM_WORLD,
>>>  "matrix.dat",
>>>  FILE_MODE_READ,
>>>  &petsc_viewer);
>>>   MatLoad(A, petsc_viewer);
>>>   PetscViewerDestroy(&petsc_viewer);
>>>
>>>   PetscInt m, n;
>>>   MatGetSize(A, &m, &n);
>>>
>>>   Vec x;
>>>   VecCreate(PETSC_COMM_WORLD,&x);
>>>   VecSetSizes(x,PETSC_DECIDE,m);
>>>   VecSetFromOptions(x);
>>>   VecSet(x,1.0);
>>>
>>>   Vec b;
>>>   VecDuplicate(x,&b);
>>>
>>>   KSP ksp;
>>>   PC pc;
>>>
>>>   KSPCreate(PETSC_COMM_WORLD,&ksp);
>>>   KSPSetOperators(ksp,A,A);
>>>
>>>   KSPSetType(ksp,KSPPREONLY);
>>>   KSPGetPC(ksp,&pc);
>>>
>>>   PCSetType(pc,PCCHOLESKY);
>>>
>>>   PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS);
>>>   PCFactorSetUpMatSolverPackage(pc);
>>>
>>>   KSPSetFromOptions(ksp);
>>>   KSPSetUp(ksp);
>>>
>>>   KSPSolve(ksp,b,x);
>>>
>>>   {
>>> KSPConvergedReason reason;
>>> KSPGetConvergedReason(ksp, &reason);
>>> std::cout << "converged reason: " << reason << std::endl;
>>>   }
>>>
>>>   Mat F;
>>>   PCFactorGetMatrix(pc,&F);
>>>   MatMumpsSetIcntl(F,14,30);
>>>
>>>   KSPSolve(ksp,b,x);
>>>
>>>   {
>>> KSPConvergedReason reason;
>>> KSPGetConvergedReason(ksp, &reason);
>>> std::cout << "converged reason: " << reason << std::endl;
>>>   }
>>>
>>> 
>>> -
>>> -ksp_view output (ICNTL(14) changes from 20 to 30, but we get "converged
>>> reason: -11" for both solves)
>>>
>>> KSP Object: 4 MPI processes
>>>   type: preonly
>>>   maximum iterations=1, initial guess is zero
>>>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>>>   left preconditioning
>>>   using NONE norm type for convergence test
>>> PC Object: 4 MPI processes
>>>   type: cholesky
>>> Cholesky: out-of-place factorization
>>> tolerance for zero pivot 2.22045e-14
>>> matrix ordering: natural
>>> factor fill ratio given 0., needed 0.
>>>   Factored matrix follows:
>>> Mat Object: 4 MPI processes
>>>   type: mpiaij
>>>   

Re: [petsc-users] Issue updating MUMPS ictnl after failed solve

2016-09-19 Thread David Knezevic
On Mon, Sep 19, 2016 at 7:26 PM, Dave May  wrote:

>
>
> On 19 September 2016 at 21:05, David Knezevic 
> wrote:
>
>> When I use MUMPS via PETSc, one issue is that it can sometimes fail with
>> MUMPS error -9, which means that MUMPS didn't allocate a big enough
>> workspace. This can typically be fixed by increasing MUMPS icntl 14, e.g.
>> via the command line option -mat_mumps_icntl_14.
>>
>> However, instead of having to run several times with different command
>> line options, I'd like to be able to automatically increment icntl 14 value
>> in a loop until the solve succeeds.
>>
>> I have a saved matrix which fails when I use it for a solve with MUMPS
>> with 4 MPI processes and the default ictnl values, so I'm using this to
>> check that I can achieve the automatic icntl 14 update, as described above.
>> (The matrix is 14MB so I haven't attached it here, but I'd be happy to send
>> it to anyone else who wants to try this test case out.)
>>
>> I've pasted some test code below which provides a simple test of this
>> idea using two solves. The first solve uses the default value of icntl 14,
>> which fails, and then we update icntl 14 to 30 and solve again. The second
>> solve should succeed since icntl 14 of 30 is sufficient for MUMPS to
>> succeed in this case, but for some reason the second solve still fails.
>>
>> Below I've also pasted the output from -ksp_view, and you can see that
>> ictnl 14 is being updated correctly (see the ICNTL(14) lines in the
>> output), so it's not clear to me why the second solve fails. It seems like
>> MUMPS is ignoring the update to the ictnl value?
>>
>
> I believe this parameter is utilized during the numerical factorization
> phase.
> In your code, the operator hasn't changed, however you haven't signalled
> to the KSP that you want to re-perform the numerical factorization.
> You can do this by calling KSPSetOperators() before your second solve.
> I think if you do this (please try it), the factorization will be
> performed again and the new value of icntl will have an effect.
>
> Note this is a wild stab in the dark - I haven't dug through the
> petsc-mumps code in detail...
>

That sounds like a plausible guess to me, but unfortunately it didn't work.
I added KSPSetOperators(ksp,A,A); before the second solve and I got the
same behavior as before.

Thanks,
David





> 
>> -
>> Test code:
>>
>>   Mat A;
>>   MatCreate(PETSC_COMM_WORLD,&A);
>>   MatSetType(A,MATMPIAIJ);
>>
>>   PetscViewer petsc_viewer;
>>   PetscViewerBinaryOpen( PETSC_COMM_WORLD,
>>  "matrix.dat",
>>  FILE_MODE_READ,
>>  &petsc_viewer);
>>   MatLoad(A, petsc_viewer);
>>   PetscViewerDestroy(&petsc_viewer);
>>
>>   PetscInt m, n;
>>   MatGetSize(A, &m, &n);
>>
>>   Vec x;
>>   VecCreate(PETSC_COMM_WORLD,&x);
>>   VecSetSizes(x,PETSC_DECIDE,m);
>>   VecSetFromOptions(x);
>>   VecSet(x,1.0);
>>
>>   Vec b;
>>   VecDuplicate(x,&b);
>>
>>   KSP ksp;
>>   PC pc;
>>
>>   KSPCreate(PETSC_COMM_WORLD,&ksp);
>>   KSPSetOperators(ksp,A,A);
>>
>>   KSPSetType(ksp,KSPPREONLY);
>>   KSPGetPC(ksp,&pc);
>>
>>   PCSetType(pc,PCCHOLESKY);
>>
>>   PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS);
>>   PCFactorSetUpMatSolverPackage(pc);
>>
>>   KSPSetFromOptions(ksp);
>>   KSPSetUp(ksp);
>>
>>   KSPSolve(ksp,b,x);
>>
>>   {
>> KSPConvergedReason reason;
>> KSPGetConvergedReason(ksp, &reason);
>> std::cout << "converged reason: " << reason << std::endl;
>>   }
>>
>>   Mat F;
>>   PCFactorGetMatrix(pc,&F);
>>   MatMumpsSetIcntl(F,14,30);
>>
>>   KSPSolve(ksp,b,x);
>>
>>   {
>> KSPConvergedReason reason;
>> KSPGetConvergedReason(ksp, &reason);
>> std::cout << "converged reason: " << reason << std::endl;
>>   }
>>
>> 
>> -
>> -ksp_view output (ICNTL(14) changes from 20 to 30, but we get "converged
>> reason: -11" for both solves)
>>
>> KSP Object: 4 MPI processes
>>   type: preonly
>>   maximum iterations=1, initial guess is zero
>>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>>   left preconditioning
>>   using NONE norm type for convergence test
>> PC Object: 4 MPI processes
>>   type: cholesky
>> Cholesky: out-of-place factorization
>> tolerance for zero pivot 2.22045e-14
>> matrix ordering: natural
>> factor fill ratio given 0., needed 0.
>>   Factored matrix follows:
>> Mat Object: 4 MPI processes
>>   type: mpiaij
>>   rows=22878, cols=22878
>>   package used to perform factorization: mumps
>>   total: nonzeros=3361617, allocated nonzeros=3361617
>>   total number of mallocs used during MatSetValues calls =0
>> MUMPS run parameters:
>>   SYM (matrix type):   2
>>   PAR 

Re: [petsc-users] Issue updating MUMPS ictnl after failed solve

2016-09-19 Thread Dave May
On 19 September 2016 at 21:05, David Knezevic 
wrote:

> When I use MUMPS via PETSc, one issue is that it can sometimes fail with
> MUMPS error -9, which means that MUMPS didn't allocate a big enough
> workspace. This can typically be fixed by increasing MUMPS icntl 14, e.g.
> via the command line option -mat_mumps_icntl_14.
>
> However, instead of having to run several times with different command
> line options, I'd like to be able to automatically increment icntl 14 value
> in a loop until the solve succeeds.
>
> I have a saved matrix which fails when I use it for a solve with MUMPS
> with 4 MPI processes and the default ictnl values, so I'm using this to
> check that I can achieve the automatic icntl 14 update, as described above.
> (The matrix is 14MB so I haven't attached it here, but I'd be happy to send
> it to anyone else who wants to try this test case out.)
>
> I've pasted some test code below which provides a simple test of this idea
> using two solves. The first solve uses the default value of icntl 14, which
> fails, and then we update icntl 14 to 30 and solve again. The second solve
> should succeed since icntl 14 of 30 is sufficient for MUMPS to succeed in
> this case, but for some reason the second solve still fails.
>
> Below I've also pasted the output from -ksp_view, and you can see that
> ictnl 14 is being updated correctly (see the ICNTL(14) lines in the
> output), so it's not clear to me why the second solve fails. It seems like
> MUMPS is ignoring the update to the ictnl value?
>

I believe this parameter is utilized during the numerical factorization
phase.
In your code, the operator hasn't changed, however you haven't signalled to
the KSP that you want to re-perform the numerical factorization.
You can do this by calling KSPSetOperators() before your second solve.
I think if you do this (please try it), the factorization will be performed
again and the new value of icntl will have an effect.

Note this is a wild stab in the dark - I haven't dug through the
petsc-mumps code in detail...

Thanks,
  Dave
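In code, the suggestion amounts to the following before the second solve (a sketch only; F is the factored matrix obtained from PCFactorGetMatrix() as in the test code below):

  MatMumpsSetIcntl(F,14,30);
  KSPSetOperators(ksp,A,A);   /* signal the KSP that the operator should be set up (factored) again */
  KSPSolve(ksp,b,x);

As the rest of the thread shows, this alone was not enough in David's case; see Hong's PCReset() recipe above.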


>
>
> Thanks,
> David
>
> 
> -
> Test code:
>
>   Mat A;
>   MatCreate(PETSC_COMM_WORLD,&A);
>   MatSetType(A,MATMPIAIJ);
>
>   PetscViewer petsc_viewer;
>   PetscViewerBinaryOpen( PETSC_COMM_WORLD,
>  "matrix.dat",
>  FILE_MODE_READ,
>  &petsc_viewer);
>   MatLoad(A, petsc_viewer);
>   PetscViewerDestroy(&petsc_viewer);
>
>   PetscInt m, n;
>   MatGetSize(A, &m, &n);
>
>   Vec x;
>   VecCreate(PETSC_COMM_WORLD,&x);
>   VecSetSizes(x,PETSC_DECIDE,m);
>   VecSetFromOptions(x);
>   VecSet(x,1.0);
>
>   Vec b;
>   VecDuplicate(x,&b);
>
>   KSP ksp;
>   PC pc;
>
>   KSPCreate(PETSC_COMM_WORLD,&ksp);
>   KSPSetOperators(ksp,A,A);
>
>   KSPSetType(ksp,KSPPREONLY);
>   KSPGetPC(ksp,&pc);
>
>   PCSetType(pc,PCCHOLESKY);
>
>   PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS);
>   PCFactorSetUpMatSolverPackage(pc);
>
>   KSPSetFromOptions(ksp);
>   KSPSetUp(ksp);
>
>   KSPSolve(ksp,b,x);
>
>   {
> KSPConvergedReason reason;
> KSPGetConvergedReason(ksp, &reason);
> std::cout << "converged reason: " << reason << std::endl;
>   }
>
>   Mat F;
>   PCFactorGetMatrix(pc,&F);
>   MatMumpsSetIcntl(F,14,30);
>
>   KSPSolve(ksp,b,x);
>
>   {
> KSPConvergedReason reason;
> KSPGetConvergedReason(ksp, &reason);
> std::cout << "converged reason: " << reason << std::endl;
>   }
>
> 
> -
> -ksp_view output (ICNTL(14) changes from 20 to 30, but we get "converged
> reason: -11" for both solves)
>
> KSP Object: 4 MPI processes
>   type: preonly
>   maximum iterations=1, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>   left preconditioning
>   using NONE norm type for convergence test
> PC Object: 4 MPI processes
>   type: cholesky
> Cholesky: out-of-place factorization
> tolerance for zero pivot 2.22045e-14
> matrix ordering: natural
> factor fill ratio given 0., needed 0.
>   Factored matrix follows:
> Mat Object: 4 MPI processes
>   type: mpiaij
>   rows=22878, cols=22878
>   package used to perform factorization: mumps
>   total: nonzeros=3361617, allocated nonzeros=3361617
>   total number of mallocs used during MatSetValues calls =0
> MUMPS run parameters:
>   SYM (matrix type):   2
>   PAR (host participation):1
>   ICNTL(1) (output for error): 6
>   ICNTL(2) (output of diagnostic msg): 0
>   ICNTL(3) (output for global info):   0
>   ICNTL(4) (level of printing):0
>   ICNTL(5) (input mat struct): 0
>   ICNTL(6) (matrix prescaling):7
>   

Re: [petsc-users] Issue updating MUMPS ictnl after failed solve

2016-09-19 Thread Hong
David:
I'll check it ...
Hong

When I use MUMPS via PETSc, one issue is that it can sometimes fail with
> MUMPS error -9, which means that MUMPS didn't allocate a big enough
> workspace. This can typically be fixed by increasing MUMPS icntl 14, e.g.
> via the command line option -mat_mumps_icntl_14.
>
> However, instead of having to run several times with different command
> line options, I'd like to be able to automatically increment icntl 14 value
> in a loop until the solve succeeds.
>
> I have a saved matrix which fails when I use it for a solve with MUMPS
> with 4 MPI processes and the default ictnl values, so I'm using this to
> check that I can achieve the automatic icntl 14 update, as described above.
> (The matrix is 14MB so I haven't attached it here, but I'd be happy to send
> it to anyone else who wants to try this test case out.)
>
> I've pasted some test code below which provides a simple test of this idea
> using two solves. The first solve uses the default value of icntl 14, which
> fails, and then we update icntl 14 to 30 and solve again. The second solve
> should succeed since icntl 14 of 30 is sufficient for MUMPS to succeed in
> this case, but for some reason the second solve still fails.
>
> Below I've also pasted the output from -ksp_view, and you can see that
> ictnl 14 is being updated correctly (see the ICNTL(14) lines in the
> output), so it's not clear to me why the second solve fails. It seems like
> MUMPS is ignoring the update to the ictnl value?
>
> Thanks,
> David
>
> 
> -
> Test code:
>
>   Mat A;
>   MatCreate(PETSC_COMM_WORLD,&A);
>   MatSetType(A,MATMPIAIJ);
>
>   PetscViewer petsc_viewer;
>   PetscViewerBinaryOpen( PETSC_COMM_WORLD,
>  "matrix.dat",
>  FILE_MODE_READ,
>  &petsc_viewer);
>   MatLoad(A, petsc_viewer);
>   PetscViewerDestroy(&petsc_viewer);
>
>   PetscInt m, n;
>   MatGetSize(A, &m, &n);
>
>   Vec x;
>   VecCreate(PETSC_COMM_WORLD,&x);
>   VecSetSizes(x,PETSC_DECIDE,m);
>   VecSetFromOptions(x);
>   VecSet(x,1.0);
>
>   Vec b;
>   VecDuplicate(x,&b);
>
>   KSP ksp;
>   PC pc;
>
>   KSPCreate(PETSC_COMM_WORLD,&ksp);
>   KSPSetOperators(ksp,A,A);
>
>   KSPSetType(ksp,KSPPREONLY);
>   KSPGetPC(ksp,&pc);
>
>   PCSetType(pc,PCCHOLESKY);
>
>   PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS);
>   PCFactorSetUpMatSolverPackage(pc);
>
>   KSPSetFromOptions(ksp);
>   KSPSetUp(ksp);
>
>   KSPSolve(ksp,b,x);
>
>   {
> KSPConvergedReason reason;
> KSPGetConvergedReason(ksp, &reason);
> std::cout << "converged reason: " << reason << std::endl;
>   }
>
>   Mat F;
>   PCFactorGetMatrix(pc,&F);
>   MatMumpsSetIcntl(F,14,30);
>
>   KSPSolve(ksp,b,x);
>
>   {
> KSPConvergedReason reason;
> KSPGetConvergedReason(ksp, &reason);
> std::cout << "converged reason: " << reason << std::endl;
>   }
>
> 
> -
> -ksp_view output (ICNTL(14) changes from 20 to 30, but we get "converged
> reason: -11" for both solves)
>
> KSP Object: 4 MPI processes
>   type: preonly
>   maximum iterations=1, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>   left preconditioning
>   using NONE norm type for convergence test
> PC Object: 4 MPI processes
>   type: cholesky
> Cholesky: out-of-place factorization
> tolerance for zero pivot 2.22045e-14
> matrix ordering: natural
> factor fill ratio given 0., needed 0.
>   Factored matrix follows:
> Mat Object: 4 MPI processes
>   type: mpiaij
>   rows=22878, cols=22878
>   package used to perform factorization: mumps
>   total: nonzeros=3361617, allocated nonzeros=3361617
>   total number of mallocs used during MatSetValues calls =0
> MUMPS run parameters:
>   SYM (matrix type):   2
>   PAR (host participation):1
>   ICNTL(1) (output for error): 6
>   ICNTL(2) (output of diagnostic msg): 0
>   ICNTL(3) (output for global info):   0
>   ICNTL(4) (level of printing):0
>   ICNTL(5) (input mat struct): 0
>   ICNTL(6) (matrix prescaling):7
>   ICNTL(7) (sequentia matrix ordering):7
>   ICNTL(8) (scalling strategy):77
>   ICNTL(10) (max num of refinements):  0
>   ICNTL(11) (error analysis):  0
>   ICNTL(12) (efficiency control): 0
>   ICNTL(13) (efficiency control): 0
>   ICNTL(14) (percentage of estimated workspace increase): 20
>   ICNTL(18) (input mat struct):   3
>   ICNTL(19) (Shur complement info):   0
>

Re: [petsc-users] Example for MatInvertBlockDiagonal

2016-09-19 Thread Barry Smith

> On Sep 19, 2016, at 2:21 PM, Cyrill Vonplanta  
> wrote:
> 
> 
>>  block size > 1 really only makes sense if the block size is really greater 
>> than one. So if A has blocks of size 3 you should create A as BAIJ and thus 
>> never need to call the convert routine.
> 
> Unfortunately A is not created by my part of the program and comes with 
> blocksize 1.

   Ok, copy the code for MatInvertBlockDiagonal_SeqAIJ() into your source code 
under a different name, modify it to serve your purpose, and call it directly 
instead of calling MatInvertBlockDiagonal().

> 
>> 
>>  You can also set the block size for AIJ matrix to 3 and use 
>> MatInvertBlockDiagonal() on that matrix and not use the BAIJ matrix. 
> 
> If I run:
> ierr = MatSetBlockSize(A, 3);  CHKERRQ(ierr);
> 
> It doesn’t work for me. I get:
> 
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Arguments are incompatible
> [0]PETSC ERROR: Cannot change block size 1 to 3
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 
> .
> .
> 
> What are the constraints for block size? 

   You need to set it early in the life of the matrix.

  Barry
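A sketch of what "early" means here (n and nnz are placeholder sizes; the point is that MatSetBlockSize() has to come before preallocation and before any values are inserted):

  Mat A;
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);CHKERRQ(ierr);
  ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr);
  ierr = MatSetBlockSize(A,3);CHKERRQ(ierr);                  /* before preallocation / MatSetValues */
  ierr = MatSeqAIJSetPreallocation(A,nnz,NULL);CHKERRQ(ierr); /* or MatMPIAIJSetPreallocation in parallel */
  /* ... MatSetValues / MatAssemblyBegin / MatAssemblyEnd ... */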

> 
> 
> 
> 



Re: [petsc-users] Example for MatInvertBlockDiagonal

2016-09-19 Thread Cyrill Vonplanta

>   block size > 1 really only makes sense if the block size is really greater 
> than one. So if A has blocks of size 3 you should create A as BAIJ and thus 
> never need to call the convert routine.

Unfortunately A is not created by my part of the program and comes with 
blocksize 1.

>
>   You can also set the block size for AIJ matrix to 3 and use 
> MatInvertBlockDiagonal() on that matrix and not use the BAIJ matrix. 

If I run:
ierr = MatSetBlockSize(A, 3);  CHKERRQ(ierr);

It doesn’t work for me. I get:

[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Arguments are incompatible
[0]PETSC ERROR: Cannot change block size 1 to 3
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 
.
.

What are the constraints for blocksize? 






Re: [petsc-users] Question about PETScSF usage in DMNetwork/DMPlex

2016-09-19 Thread Adrian Maldonado
Ok, got it!

Thanks

On Mon, Sep 19, 2016 at 6:25 AM, Matthew Knepley  wrote:

> On Fri, Sep 16, 2016 at 11:36 AM, Adrian Maldonado 
> wrote:
>
>> Hi,
>>
>> I am trying to understand some of the data structures DMPlex/DMNetwork
>> creates and the relationship among them.
>>
>> As an example, I have an small test circuit (/src/ksp/ksp/examples/tutoria
>> ls/network/ex1.c).
>>
>> This is a graph that consists on 6 edges and 4 vertices, each one of
>> those having one degree of freedom.  When ran with two processors, each
>> rank will own 3 edges. Rank 0 will own one vertex (3 ghost) and Rank 1 will
>> own 3 vertices.
>>
>> These are some data structures for this problem. I am getting these data
>> structures inside DMNetworkDistribute
>> 
>>
>> DM Object: Parallel Mesh 2 MPI processes
>>   type: plex
>> Parallel Mesh in 1 dimensions:
>>   0-cells: 4 3
>>   1-cells: 3 3
>> Labels:
>>   depth: 2 strata of sizes (4, 3)
>>
>> This, as  I understand, is printing a tree with all the vertices and
>> edges in each processor (owned and ghost).
>>
>> PetscSection Object: 2 MPI processes
>>   type not yet set
>> Process 0:
>>   (   0) dim  1 offset   0
>>   (   1) dim  1 offset   1
>>   (   2) dim  1 offset   2
>>   (   3) dim  1 offset   3
>>   (   4) dim -2 offset  -8
>>   (   5) dim -2 offset  -9
>>   (   6) dim -2 offset -10
>> Process 1:
>>   (   0) dim  1 offset   4
>>   (   1) dim  1 offset   5
>>   (   2) dim  1 offset   6
>>   (   3) dim  1 offset   7
>>   (   4) dim  1 offset   8
>>   (   5) dim  1 offset   9
>>
>> This is a global PETSc section that gives me the global numbering for the
>> owned points and (garbage?) negative values for ghost.
>>
>> Until here everything is good. But then I print the PetscSF that is
>> created by  'DMPlexDistribute'. This I do not understand:
>>
>
> 1) You are looking at the MigrationSF, not the eventual PointSF or
> OffsetSF from the DM. You need
>
>   DMGetDefaultSF or DMGetPointSF
>
> for those.
>
> 2) Notice that edges 0,1,3 are sent to proc 0, and edges 2,4,5 are sent to
> proc 1.
>
>Matt
>
>
>> PetscSF Object: Migration SF 2 MPI processes
>>   type: basic
>> sort=rank-order
>>   [0] Number of roots=10, leaves=7, remote ranks=1
>>   [0] 0 <- (0,0)
>>   [0] 1 <- (0,1)
>>   [0] 2 <- (0,3)
>>   [0] 3 <- (0,6)
>>   [0] 4 <- (0,7)
>>   [0] 5 <- (0,8)
>>   [0] 6 <- (0,9)
>>   [1] Number of roots=0, leaves=6, remote ranks=1
>>   [1] 0 <- (0,2)
>>   [1] 1 <- (0,4)
>>   [1] 2 <- (0,5)
>>   [1] 3 <- (0,7)
>>   [1] 4 <- (0,8)
>>   [1] 5 <- (0,9)
>>   [0] Roots referenced by my leaves, by rank
>>   [0] 0: 7 edges
>>   [0]0 <- 0
>>   [0]1 <- 1
>>   [0]2 <- 3
>>   [0]3 <- 6
>>   [0]4 <- 7
>>   [0]5 <- 8
>>   [0]6 <- 9
>>   [1] Roots referenced by my leaves, by rank
>>   [1] 0: 6 edges
>>   [1]0 <- 2
>>   [1]1 <- 4
>>   [1]2 <- 5
>>   [1]3 <- 7
>>   [1]4 <- 8
>>   [1]5 <- 9
>>
>> I understand that SF is a data structure that saves references to pieces
>> of data that are now owned by the process (https://arxiv.org/pdf/1506.06
>> 194v1.pdf, page 4).
>>
>> Since the only ghost nodes appear in rank 0 (three ghost vertices) I
>> would expect something like:
>> *rank 0:*
>>   4 - (1, 3)  (to read: point 4 is owned by rank 1 and is rank's 1 point
>> 3)
>>   etc...
>> *rank 1:*
>>   nothing
>>
>> Is my intuition correct? If so, what does the star forest that I get from
>> DMPlexDistribute mean? I am printing the wrong thing?
>>
>> Thank you
>>
>> --
>> D. Adrian Maldonado, PhD Candidate
>> Electrical & Computer Engineering Dept.
>> Illinois Institute of Technology
>> 3301 S. Dearborn Street, Chicago, IL 60616
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>



-- 
D. Adrian Maldonado, PhD Candidate
Electrical & Computer Engineering Dept.
Illinois Institute of Technology
3301 S. Dearborn Street, Chicago, IL 60616


[petsc-users] Issue updating MUMPS ictnl after failed solve

2016-09-19 Thread David Knezevic
When I use MUMPS via PETSc, one issue is that it can sometimes fail with
MUMPS error -9, which means that MUMPS didn't allocate a big enough
workspace. This can typically be fixed by increasing MUMPS icntl 14, e.g.
via the command line option -mat_mumps_icntl_14.

However, instead of having to run several times with different command line
options, I'd like to be able to automatically increment icntl 14 value in a
loop until the solve succeeds.

I have a saved matrix which fails when I use it for a solve with MUMPS with
4 MPI processes and the default icntl values, so I'm using this to check
that I can achieve the automatic icntl 14 update, as described above. (The
matrix is 14MB so I haven't attached it here, but I'd be happy to send it
to anyone else who wants to try this test case out.)

I've pasted some test code below which provides a simple test of this idea
using two solves. The first solve uses the default value of icntl 14, which
fails, and then we update icntl 14 to 30 and solve again. The second solve
should succeed since icntl 14 of 30 is sufficient for MUMPS to succeed in
this case, but for some reason the second solve still fails.

Below I've also pasted the output from -ksp_view, and you can see that
icntl 14 is being updated correctly (see the ICNTL(14) lines in the
output), so it's not clear to me why the second solve fails. It seems like
MUMPS is ignoring the update to the icntl value?

Thanks,
David


-
Test code:

  Mat A;
  MatCreate(PETSC_COMM_WORLD,&A);
  MatSetType(A,MATMPIAIJ);

  PetscViewer petsc_viewer;
  PetscViewerBinaryOpen( PETSC_COMM_WORLD,
 "matrix.dat",
 FILE_MODE_READ,
 &petsc_viewer);
  MatLoad(A, petsc_viewer);
  PetscViewerDestroy(&petsc_viewer);

  PetscInt m, n;
  MatGetSize(A, &m, &n);

  Vec x;
  VecCreate(PETSC_COMM_WORLD,&x);
  VecSetSizes(x,PETSC_DECIDE,m);
  VecSetFromOptions(x);
  VecSet(x,1.0);

  Vec b;
  VecDuplicate(x,&b);

  KSP ksp;
  PC pc;

  KSPCreate(PETSC_COMM_WORLD,&ksp);
  KSPSetOperators(ksp,A,A);

  KSPSetType(ksp,KSPPREONLY);
  KSPGetPC(ksp,&pc);

  PCSetType(pc,PCCHOLESKY);

  PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS);
  PCFactorSetUpMatSolverPackage(pc);

  KSPSetFromOptions(ksp);
  KSPSetUp(ksp);

  KSPSolve(ksp,b,x);

  {
KSPConvergedReason reason;
KSPGetConvergedReason(ksp, &reason);
std::cout << "converged reason: " << reason << std::endl;
  }

  Mat F;
  PCFactorGetMatrix(pc,&F);
  MatMumpsSetIcntl(F,14,30);

  KSPSolve(ksp,b,x);

  {
KSPConvergedReason reason;
KSPGetConvergedReason(ksp, &reason);
std::cout << "converged reason: " << reason << std::endl;
  }


-
-ksp_view output (ICNTL(14) changes from 20 to 30, but we get "converged
reason: -11" for both solves)

KSP Object: 4 MPI processes
  type: preonly
  maximum iterations=1, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
  left preconditioning
  using NONE norm type for convergence test
PC Object: 4 MPI processes
  type: cholesky
Cholesky: out-of-place factorization
tolerance for zero pivot 2.22045e-14
matrix ordering: natural
factor fill ratio given 0., needed 0.
  Factored matrix follows:
Mat Object: 4 MPI processes
  type: mpiaij
  rows=22878, cols=22878
  package used to perform factorization: mumps
  total: nonzeros=3361617, allocated nonzeros=3361617
  total number of mallocs used during MatSetValues calls =0
MUMPS run parameters:
  SYM (matrix type):   2
  PAR (host participation):1
  ICNTL(1) (output for error): 6
  ICNTL(2) (output of diagnostic msg): 0
  ICNTL(3) (output for global info):   0
  ICNTL(4) (level of printing):0
  ICNTL(5) (input mat struct): 0
  ICNTL(6) (matrix prescaling):7
  ICNTL(7) (sequentia matrix ordering):7
  ICNTL(8) (scalling strategy):77
  ICNTL(10) (max num of refinements):  0
  ICNTL(11) (error analysis):  0
  ICNTL(12) (efficiency control): 0
  ICNTL(13) (efficiency control): 0
  ICNTL(14) (percentage of estimated workspace increase): 20
  ICNTL(18) (input mat struct):   3
  ICNTL(19) (Shur complement info):   0
  ICNTL(20) (rhs sparse pattern): 0
  ICNTL(21) (solution struct):1
  ICNTL(22) (in-core/out-of-core facility):   0
  ICNTL(23) (max size of memory can be allocated locally):0
   

Re: [petsc-users] Example for MatInvertBlockDiagonal

2016-09-19 Thread Barry Smith

> On Sep 19, 2016, at 10:43 AM, Cyrill Vonplanta  
> wrote:
> 
> Barry,
> 
> Thanks a lot. I’d like to use this for a nonlinear variant of a 
> block-gauss-seidel smoother. I would like to use MatInvertBlockDiagonal for 
> speeding up my variant.
> 
> I think I can work with this, however I also have the problem to turn my 
> initial matrix into one with a blocksize of 3.When I call:
> 
> MatConvert(A, MATBAIJ, MAT_INITIAL_MATRIX, Dinverse);

   MatConvert() creates a new matrix, so you should not create the Dinverse 
beforehand; anything you put in Dinverse before the call is lost.

   block size > 1 really only makes sense if the block size is really greater 
than one. So if A has blocks of size 3 you should create A as BAIJ and thus 
never need to call the convert routine.

   You can also set the block size for AIJ matrix to 3 and use 
MatInvertBlockDiagonal() on that matrix and not use the BAIJ matrix. 

   Finally note that MatInvertBlockDiagonal() ends up calling (for block size 
3) PetscKernel_A_gets_inverse_A_3() for each block. It sounds to me like that 
is what you would want for a nonlinear variant of a block Gauss-Seidel smoother.

   Barry

> 
> 
> Then the matrix Dinverse has blocksize 1 which comes from A. I checked the 
> blocksize before the conversion and it was 3, so it seems to get lost.
> 
> What is the correct (and elegant) way to turn a matrix into a block matrix?
> 
> 
> Best
> Cyrill
> 
> 
> 
> On 19/09/16 17:18, "Barry Smith"  wrote:
> 
>> 
>> Cyrill,
>> 
>>   This is very specialized for implementing point block Jacobi; I don't 
>> think it is something you would want to use directly. 
>> 
>>If you do want to use it, it simply returns the inverses of the block 
>> diagonals in column major form. You can then call MatSetValues() with with 
>> each of those blocks into another PETSc matrix. values[i*bs*bs] is the 
>> starting point of each block in the array.
>> 
>>  Barry
>> 
>>> On Sep 19, 2016, at 4:55 AM, Cyrill Vonplanta  
>>> wrote:
>>> 
>>> Dear PETSc-Users,
>>> 
>>> I would like to use the inverted block diagonals of a a matrix. I have seen 
>>> the function MatInvertBlockDiagonal() but I don’t know how to create a 
>>> matrix out of them or an array of block matrizes.
>>> 
>>> Does anyone have an example on how to use  **values to create a PETSc 
>>> matrix?
>>> 
>>> Thanks
>>> Cyrill
>> 



Re: [petsc-users] Example for MatInvertBlockDiagonal

2016-09-19 Thread Cyrill Vonplanta
Barry,

Thanks a lot. I’d like to use this for a nonlinear variant of a 
block-gauss-seidel smoother. I would like to use MatInvertBlockDiagonal for 
speeding up my variant.

I think I can work with this; however, I also have the problem of turning my 
initial matrix into one with a block size of 3. When I call:

MatConvert(A, MATBAIJ, MAT_INITIAL_MATRIX, Dinverse);


Then the matrix Dinverse has blocksize 1 which comes from A. I checked the 
blocksize before the conversion and it was 3, so it seems to get lost.

What is the correct (and elegant) way to turn a matrix into a block matrix?


Best
Cyrill



On 19/09/16 17:18, "Barry Smith"  wrote:

>
>  Cyrill,
>
>This is very specialized for implementing point block Jacobi; I don't 
> think it is something you would want to use directly. 
>
> If you do want to use it, it simply returns the inverses of the block 
> diagonals in column major form. You can then call MatSetValues() with with 
> each of those blocks into another PETSc matrix. values[i*bs*bs] is the 
> starting point of each block in the array.
>
>   Barry
>
>> On Sep 19, 2016, at 4:55 AM, Cyrill Vonplanta  
>> wrote:
>> 
>> Dear PETSc-Users,
>> 
>> I would like to use the inverted block diagonals of a a matrix. I have seen 
>> the function MatInvertBlockDiagonal() but I don’t know how to create a 
>> matrix out of them or an array of block matrizes.
>> 
>> Does anyone have an example on how to use  **values to create a PETSc matrix?
>> 
>> Thanks
>> Cyrill
>


Re: [petsc-users] Example for MatInvertBlockDiagonal

2016-09-19 Thread Barry Smith

  Cyrill,

This is very specialized for implementing point block Jacobi; I don't think 
it is something you would want to use directly. 

 If you do want to use it, it simply returns the inverses of the block 
diagonals in column major form. You can then call MatSetValues() with with each 
of those blocks into another PETSc matrix. values[i*bs*bs] is the starting 
point of each block in the array.

   Barry
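A rough sketch of that recipe for a sequential AIJ matrix A with 3x3 diagonal blocks (untested; MAT_ROW_ORIENTED is switched off so that the column-major blocks returned by MatInvertBlockDiagonal() can be passed straight to MatSetValues()):

  const PetscScalar *vals;
  const PetscInt     bs = 3;
  PetscInt           i,j,m,nblocks,idx[3];
  Mat                Dinv;

  ierr = MatGetSize(A,&m,NULL);CHKERRQ(ierr);
  nblocks = m/bs;
  ierr = MatInvertBlockDiagonal(A,&vals);CHKERRQ(ierr);    /* vals[i*bs*bs] starts block i */

  ierr = MatCreate(PETSC_COMM_SELF,&Dinv);CHKERRQ(ierr);
  ierr = MatSetSizes(Dinv,m,m,m,m);CHKERRQ(ierr);
  ierr = MatSetType(Dinv,MATSEQAIJ);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(Dinv,bs,NULL);CHKERRQ(ierr);
  ierr = MatSetOption(Dinv,MAT_ROW_ORIENTED,PETSC_FALSE);CHKERRQ(ierr);  /* blocks are column major */

  for (i=0; i<nblocks; i++) {
    for (j=0; j<bs; j++) idx[j] = i*bs + j;
    ierr = MatSetValues(Dinv,bs,idx,bs,idx,&vals[i*bs*bs],INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(Dinv,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(Dinv,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);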

> On Sep 19, 2016, at 4:55 AM, Cyrill Vonplanta  
> wrote:
> 
> Dear PETSc-Users,
> 
> I would like to use the inverted block diagonals of a a matrix. I have seen 
> the function MatInvertBlockDiagonal() but I don’t know how to create a matrix 
> out of them or an array of block matrizes.
> 
> Does anyone have an example on how to use  **values to create a PETSc matrix?
> 
> Thanks
> Cyrill



Re: [petsc-users] Does Petsc support matrix diagonalization

2016-09-19 Thread Hong
丁老师:

> Dear friends:
> I want to diagonalize matrix D:
> D=PAP^(-1).
> where A is the diagonal matrix , P is the transformation matrix.
> Does Petsc has this routine to perform this task.
>

This is an eigenvalue/singular value decomposition of D. For dense D, you
can use Elemental; for sparse D, use SLEPc.

Hong
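For the sparse case, a minimal SLEPc sketch (assumes SLEPc is installed against this PETSc build and SlepcInitialize() has been called; run-time options such as -eps_nev choose how many eigenpairs to compute):

  #include <slepceps.h>

  EPS         eps;
  PetscInt    i,nconv;
  PetscScalar kr,ki;
  Vec         xr,xi;

  ierr = MatCreateVecs(D,&xr,NULL);CHKERRQ(ierr);
  ierr = MatCreateVecs(D,&xi,NULL);CHKERRQ(ierr);

  ierr = EPSCreate(PETSC_COMM_WORLD,&eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps,D,NULL);CHKERRQ(ierr);       /* standard problem D x = lambda x */
  ierr = EPSSetProblemType(eps,EPS_NHEP);CHKERRQ(ierr);   /* EPS_HEP if D is symmetric/Hermitian */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);            /* e.g. -eps_nev 10 */
  ierr = EPSSolve(eps);CHKERRQ(ierr);

  ierr = EPSGetConverged(eps,&nconv);CHKERRQ(ierr);
  for (i=0; i<nconv; i++) {
    ierr = EPSGetEigenpair(eps,i,&kr,&ki,xr,xi);CHKERRQ(ierr);  /* eigenvalue kr+ki*I, eigenvector in xr/xi */
  }
  ierr = EPSDestroy(&eps);CHKERRQ(ierr);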



Re: [petsc-users] Does Petsc support matrix diagonalization

2016-09-19 Thread Matthew Knepley
On Mon, Sep 19, 2016 at 7:15 AM, 丁老师  wrote:

> Dear friends:
> I want to diagonalize matrix D:
> D=PAP^(-1).
> where A is the diagonal matrix , P is the transformation matrix.
> Does Petsc has this routine to perform this task.
>

No, you should check out

  http://slepc.upv.es/

  Thanks,

 Matt


> Regards



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-users] Does Petsc support matrix diagonalization

2016-09-19 Thread 丁老师
Dear friends:
I want to diagonalize the matrix D:
D = P A P^(-1),
where A is the diagonal matrix and P is the transformation matrix.
Does PETSc have a routine to perform this task?
Regards

Re: [petsc-users] Question about PETScSF usage in DMNetwork/DMPlex

2016-09-19 Thread Matthew Knepley
On Fri, Sep 16, 2016 at 11:36 AM, Adrian Maldonado 
wrote:

> Hi,
>
> I am trying to understand some of the data structures DMPlex/DMNetwork
> creates and the relationship among them.
>
> As an example, I have a small test circuit (/src/ksp/ksp/examples/
> tutorials/network/ex1.c).
>
> This is a graph that consists of 6 edges and 4 vertices, each of which has
> one degree of freedom. When run with two processors, each rank will own 3
> edges. Rank 0 will own one vertex (3 ghost) and Rank 1 will own 3
> vertices.
>
> These are some data structures for this problem. I am getting these data
> structures inside DMNetworkDistribute
> 
>
> DM Object: Parallel Mesh 2 MPI processes
>   type: plex
> Parallel Mesh in 1 dimensions:
>   0-cells: 4 3
>   1-cells: 3 3
> Labels:
>   depth: 2 strata of sizes (4, 3)
>
> This, as  I understand, is printing a tree with all the vertices and edges
> in each processor (owned and ghost).
>
> PetscSection Object: 2 MPI processes
>   type not yet set
> Process 0:
>   (   0) dim  1 offset   0
>   (   1) dim  1 offset   1
>   (   2) dim  1 offset   2
>   (   3) dim  1 offset   3
>   (   4) dim -2 offset  -8
>   (   5) dim -2 offset  -9
>   (   6) dim -2 offset -10
> Process 1:
>   (   0) dim  1 offset   4
>   (   1) dim  1 offset   5
>   (   2) dim  1 offset   6
>   (   3) dim  1 offset   7
>   (   4) dim  1 offset   8
>   (   5) dim  1 offset   9
>
> This is a global PETSc section that gives me the global numbering for the
> owned points and (garbage?) negative values for ghost.
>
> Until here everything is good. But then I print the PetscSF that is
> created by  'DMPlexDistribute'. This I do not understand:
>

1) You are looking at the MigrationSF, not the eventual PointSF or OffsetSF
from the DM. You need

  DMGetDefaultSF or DMGetPointSF

for those.

2) Notice that edges 0,1,3 are sent to proc 0, and edges 2,4,5 are sent to
proc 1.

   Matt
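For example (a sketch; dmDist stands for whatever DM comes back from DMPlexDistribute()/DMNetworkDistribute() in ex1.c):

  PetscSF pointsf;
  ierr = DMGetPointSF(dmDist,&pointsf);CHKERRQ(ierr);    /* the SF that encodes shared/ghost points */
  ierr = PetscSFView(pointsf,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);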


> PetscSF Object: Migration SF 2 MPI processes
>   type: basic
> sort=rank-order
>   [0] Number of roots=10, leaves=7, remote ranks=1
>   [0] 0 <- (0,0)
>   [0] 1 <- (0,1)
>   [0] 2 <- (0,3)
>   [0] 3 <- (0,6)
>   [0] 4 <- (0,7)
>   [0] 5 <- (0,8)
>   [0] 6 <- (0,9)
>   [1] Number of roots=0, leaves=6, remote ranks=1
>   [1] 0 <- (0,2)
>   [1] 1 <- (0,4)
>   [1] 2 <- (0,5)
>   [1] 3 <- (0,7)
>   [1] 4 <- (0,8)
>   [1] 5 <- (0,9)
>   [0] Roots referenced by my leaves, by rank
>   [0] 0: 7 edges
>   [0]0 <- 0
>   [0]1 <- 1
>   [0]2 <- 3
>   [0]3 <- 6
>   [0]4 <- 7
>   [0]5 <- 8
>   [0]6 <- 9
>   [1] Roots referenced by my leaves, by rank
>   [1] 0: 6 edges
>   [1]0 <- 2
>   [1]1 <- 4
>   [1]2 <- 5
>   [1]3 <- 7
>   [1]4 <- 8
>   [1]5 <- 9
>
> I understand that SF is a data structure that saves references to pieces
> of data that are now owned by the process (https://arxiv.org/pdf/1506.
> 06194v1.pdf, page 4).
>
> Since the only ghost nodes appear in rank 0 (three ghost vertices) I would
> expect something like:
> *rank 0:*
>   4 - (1, 3)  (to read: point 4 is owned by rank 1 and is rank's 1 point 3)
>   etc...
> *rank 1:*
>   nothing
>
> Is my intuition correct? If so, what does the star forest that I get from
> DMPlexDistribute mean? I am printing the wrong thing?
>
> Thank you
>
> --
> D. Adrian Maldonado, PhD Candidate
> Electrical & Computer Engineering Dept.
> Illinois Institute of Technology
> 3301 S. Dearborn Street, Chicago, IL 60616
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] Question about PETScSF usage in DMNetwork/DMPlex

2016-09-19 Thread Matthew Knepley
On Fri, Sep 16, 2016 at 12:54 PM, Adrian Maldonado 
wrote:

> Just one addition about one thing I've noticed.
>
> The section:
>
> PetscSection Object: 2 MPI processes
>   type not yet set
> Process 0:
>   (   0) dim  1 offset   0
>   (   1) dim  1 offset   1
>   (   2) dim  1 offset   2
>   (   3) dim  1 offset   3
>   (   4) dim -2 offset  -8
>   (   5) dim -2 offset  -9
>   (   6) dim -2 offset -10
> Process 1:
>   (   0) dim  1 offset   4
>   (   1) dim  1 offset   5
>   (   2) dim  1 offset   6
>   (   3) dim  1 offset   7
>   (   4) dim  1 offset   8
>   (   5) dim  1 offset   9
>
> For the ghost values 4, 5, 6... is encoding the ghost values as rank =
> -(-2 + 1) and offset = -(-8 + 1) ?
>

Yes, the encoding is

  -(val + 1)

Notice that this is reversible. Say my value is v, then my ghost value is

  -(v + 1)

and I can get my original value using

 -(-(v+1) + 1)

= -(-v - 1 + 1) = v

   Matt
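In code, the convention is its own inverse (EncodeGhost is just an illustrative name, not a PETSc routine):

  /* map a point value to its "ghost" encoding and back again */
  PetscInt EncodeGhost(PetscInt v) { return -(v + 1); }
  /* EncodeGhost(EncodeGhost(v)) == v, and encoded values are always negative,
     which is why ghost points show up with negative dim/offset in the global section */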


> On Fri, Sep 16, 2016 at 11:36 AM, Adrian Maldonado 
> wrote:
>
>> Hi,
>>
>> I am trying to understand some of the data structures DMPlex/DMNetwork
>> creates and the relationship among them.
>>
>> As an example, I have an small test circuit (/src/ksp/ksp/examples/tutoria
>> ls/network/ex1.c).
>>
>> This is a graph that consists on 6 edges and 4 vertices, each one of
>> those having one degree of freedom.  When ran with two processors, each
>> rank will own 3 edges. Rank 0 will own one vertex (3 ghost) and Rank 1 will
>> own 3 vertices.
>>
>> These are some data structures for this problem. I am getting these data
>> structures inside DMNetworkDistribute
>> 
>>
>> DM Object: Parallel Mesh 2 MPI processes
>>   type: plex
>> Parallel Mesh in 1 dimensions:
>>   0-cells: 4 3
>>   1-cells: 3 3
>> Labels:
>>   depth: 2 strata of sizes (4, 3)
>>
>> This, as  I understand, is printing a tree with all the vertices and
>> edges in each processor (owned and ghost).
>>
>> PetscSection Object: 2 MPI processes
>>   type not yet set
>> Process 0:
>>   (   0) dim  1 offset   0
>>   (   1) dim  1 offset   1
>>   (   2) dim  1 offset   2
>>   (   3) dim  1 offset   3
>>   (   4) dim -2 offset  -8
>>   (   5) dim -2 offset  -9
>>   (   6) dim -2 offset -10
>> Process 1:
>>   (   0) dim  1 offset   4
>>   (   1) dim  1 offset   5
>>   (   2) dim  1 offset   6
>>   (   3) dim  1 offset   7
>>   (   4) dim  1 offset   8
>>   (   5) dim  1 offset   9
>>
>> This is a global PETSc section that gives me the global numbering for the
>> owned points and (garbage?) negative values for ghost.
>>
>> Until here everything is good. But then I print the PetscSF that is
>> created by  'DMPlexDistribute'. This I do not understand:
>>
>> PetscSF Object: Migration SF 2 MPI processes
>>   type: basic
>> sort=rank-order
>>   [0] Number of roots=10, leaves=7, remote ranks=1
>>   [0] 0 <- (0,0)
>>   [0] 1 <- (0,1)
>>   [0] 2 <- (0,3)
>>   [0] 3 <- (0,6)
>>   [0] 4 <- (0,7)
>>   [0] 5 <- (0,8)
>>   [0] 6 <- (0,9)
>>   [1] Number of roots=0, leaves=6, remote ranks=1
>>   [1] 0 <- (0,2)
>>   [1] 1 <- (0,4)
>>   [1] 2 <- (0,5)
>>   [1] 3 <- (0,7)
>>   [1] 4 <- (0,8)
>>   [1] 5 <- (0,9)
>>   [0] Roots referenced by my leaves, by rank
>>   [0] 0: 7 edges
>>   [0]0 <- 0
>>   [0]1 <- 1
>>   [0]2 <- 3
>>   [0]3 <- 6
>>   [0]4 <- 7
>>   [0]5 <- 8
>>   [0]6 <- 9
>>   [1] Roots referenced by my leaves, by rank
>>   [1] 0: 6 edges
>>   [1]0 <- 2
>>   [1]1 <- 4
>>   [1]2 <- 5
>>   [1]3 <- 7
>>   [1]4 <- 8
>>   [1]5 <- 9
>>
>> I understand that SF is a data structure that saves references to pieces
>> of data that are now owned by the process (https://arxiv.org/pdf/1506.06
>> 194v1.pdf, page 4).
>>
>> Since the only ghost nodes appear in rank 0 (three ghost vertices) I
>> would expect something like:
>> *rank 0:*
>>   4 - (1, 3)  (to read: point 4 is owned by rank 1 and is rank's 1 point
>> 3)
>>   etc...
>> *rank 1:*
>>   nothing
>>
>> Is my intuition correct? If so, what does the star forest that I get from
>> DMPlexDistribute mean? I am printing the wrong thing?
>>
>> Thank you
>>
>> --
>> D. Adrian Maldonado, PhD Candidate
>> Electrical & Computer Engineering Dept.
>> Illinois Institute of Technology
>> 3301 S. Dearborn Street, Chicago, IL 60616
>>
>
>
>
> --
> D. Adrian Maldonado, PhD Candidate
> Electrical & Computer Engineering Dept.
> Illinois Institute of Technology
> 3301 S. Dearborn Street, Chicago, IL 60616
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


[petsc-users] Example for MatInvertBlockDiagonal

2016-09-19 Thread Cyrill Vonplanta
Dear PETSc-Users,

I would like to use the inverted block diagonals of a matrix. I have seen the 
function MatInvertBlockDiagonal() but I don’t know how to create a matrix out 
of them or an array of block matrices.

Does anyone have an example on how to use  **values to create a PETSc matrix?

Thanks
Cyrill