Re: [petsc-dev] fortran compilation error

2024-05-04 Thread Jose E. Roman via petsc-dev








There have been changes in the Fortran stubs recently. Try regenerating the Fortran stubs with "make deletefortranstubs" followed by "make allfortranstubs". If this does not work, try removing the $PETSC_ARCH directory and running configure/make again.

Jose

> On 4 May 2024, at 8:17, Ravindra Chopade wrote:
> 
> Hi All,
>  
> Fortran compilation started failing with the error below after I resolved conflicts for my changes in Merge Request 7513, even though my changes do not contain any Fortran code.
> 
> 
> 
> I am not a Fortran expert; can someone help me understand and fix this issue?
> 
> Thanks
> Ravi 




Re: [petsc-dev] Issue with -log_view

2023-05-09 Thread Jose E. Roman
https://gitlab.com/petsc/petsc/-/merge_requests/6440

> On 9 May 2023, at 15:31, Matthew Knepley wrote:
> 
> On Tue, May 9, 2023 at 9:04 AM Jose E. Roman  wrote:
> I found the bug: the event MAT_MultHermitianTranspose is used but not 
> registered.
> I will create a MR.
> 
> Great!
> 
>   Thanks,
> Matt
>  
> Thanks Matt.
> 
> > On 9 May 2023, at 14:50, Matthew Knepley wrote:
> > 
> > On Tue, May 9, 2023 at 8:41 AM Jose E. Roman  wrote:
> > But MatCreateShell() calls MatInitializePackage() (via MatCreate()) and 
> > also the main program creates a regular Mat. The events should have been 
> > registered by the time the shell matrix operations are invoked.
> > 
> > The reason I say this is that PetscBarrier is the _first_ event, so if an 
> > event is called without initializing, it
> > will show up as PetscBarrier. Maybe break in MatMultTranspose, to see who 
> > is calling it first?
> > 
> >   Thanks,
> > 
> >     Matt
> >  
> > > On 9 May 2023, at 14:13, Matthew Knepley wrote:
> > > 
> > > On Tue, May 9, 2023 at 7:17 AM Jose E. Roman  wrote:
> > > Hi.
> > > 
> > > We are seeing a strange thing in the -log_view output with one of the 
> > > SLEPc solvers. It is probably an issue with SLEPc, but we don't know how 
> > > to debug it.
> > > 
> > > It can be reproduced for instance with
> > > 
> > >  $ ./ex45 -m 15 -n 20 -p 21 -svd_nsv 4 -svd_ncv 9 -log_view
> > > 
> > > The log_view events are listed at the end of this email. The first one 
> > > (PetscBarrier) is wrong, because PetscBarrier is never called, if I place 
> > > a breakpoint in PetscBarrier() it will never be hit. Also, in that event 
> > > it reports some nonzero Mflop/s, which suggests that it corresponds to 
> > > another event (not PetscBarrier). Furthermore, the count of the 
> > > PetscBarrier event always matches the count of MatMultTranspose, so there 
> > > must be a connection.
> > > 
> > > Does anyone have suggestions how to address this?
> > > 
> > > Hi Jose,
> > > 
> > > Here is my guess. PETSc sets all of the event ids (using Register) when 
> > > the dynamic libraries get loaded. If they are not loaded,
> > > then the library initialization function is called when some function 
> > > from that library is used. My guess is that we put this init check
> > > in MatCreate(), but that is not called when you create your shell matrix 
> > > and thus the events are not initialized correctly for you until
> > > later. Can you check?
> > > 
> > >   Thanks,
> > > 
> > >  Matt
> > >  
> > > Note: this is with 1 MPI process.
> > > Note: the solver creates a shell matrix with MATOP_MULT_TRANSPOSE.
> > > 
> > > Thanks.
> > > Jose
> > > 
> > > 
> > > PetscBarrier  16 1.0 7.5579e-04 1.0 4.38e+03 1.0 0.0e+00 0.0e+00 
> > > 0.0e+00  0  0  0  0  0   0  0  0  0  0 6
> > > MatMult   82 1.0 6.1590e-01 1.0 9.25e+05 1.0 0.0e+00 0.0e+00 
> > > 0.0e+00 81 97  0  0  0  81 97  0  0  0 2
> > > MatMultTranspose  16 1.0 7.4625e-04 1.0 4.38e+03 1.0 0.0e+00 0.0e+00 
> > > 0.0e+00  0  0  0  0  0   0  0  0  0  0 6
> > > MatAssemblyBegin   4 1.0 5.2452e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > > 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> > > MatAssemblyEnd 4 1.0 2.8920e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > > 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> > > MatTranspose   2 1.0 1.6265e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > > 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> > > SVDSetUp   1 1.0 1.8686e-02 1.0 2.72e+02 1.0 0.0e+00 0.0e+00 
> > > 0.0e+00  2  0  0  0  0   2  0  0  0  0 0
> > > SVDSolve   1 1.0 5.5965e-01 1.0 7.95e+05 1.0 0.0e+00 0.0e+00 
> > > 0.0e+00 74 83  0  0  0  74 83  0  0  0 1
> > > EPSSetUp   1 1.0 9.5146e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > > 0.0e+00  1  0  0  0  0   1  0  0  0  0 0
> > > EPSSolve   1 1.0 5.4082e-01 1.0 7.94e+05 1.0 0.0e+00 0.0e+00 
> > > 0.0e+00 71 83  0  0  0  71 83  0  0  0 1
> > > STSetUp1 1.0 4.8406e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > > 0.0e+00  1  0  0  0  0   1  0  0  0  0 0
> > > STComputeOperatr   1 1.0 1.2653e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > > 0.0e+00  0  0  0  0  0   0  

Re: [petsc-dev] Issue with -log_view

2023-05-09 Thread Jose E. Roman
I found the bug: the event MAT_MultHermitianTranspose is used but not 
registered.
I will create a MR.

Thanks Matt.

> On 9 May 2023, at 14:50, Matthew Knepley wrote:
> 
> On Tue, May 9, 2023 at 8:41 AM Jose E. Roman  wrote:
> But MatCreateShell() calls MatInitializePackage() (via MatCreate()) and also 
> the main program creates a regular Mat. The events should have been 
> registered by the time the shell matrix operations are invoked.
> 
> The reason I say this is that PetscBarrier is the _first_ event, so if an 
> event is called without initializing, it
> will show up as PetscBarrier. Maybe break in MatMultTranspose, to see who is 
> calling it first?
> 
>   Thanks,
> 
> Matt
>  
> > On 9 May 2023, at 14:13, Matthew Knepley wrote:
> > 
> > On Tue, May 9, 2023 at 7:17 AM Jose E. Roman  wrote:
> > Hi.
> > 
> > We are seeing a strange thing in the -log_view output with one of the SLEPc 
> > solvers. It is probably an issue with SLEPc, but we don't know how to debug 
> > it.
> > 
> > It can be reproduced for instance with
> > 
> >  $ ./ex45 -m 15 -n 20 -p 21 -svd_nsv 4 -svd_ncv 9 -log_view
> > 
> > The log_view events are listed at the end of this email. The first one 
> > (PetscBarrier) is wrong, because PetscBarrier is never called, if I place a 
> > breakpoint in PetscBarrier() it will never be hit. Also, in that event it 
> > reports some nonzero Mflop/s, which suggests that it corresponds to another 
> > event (not PetscBarrier). Furthermore, the count of the PetscBarrier event 
> > always matches the count of MatMultTranspose, so there must be a connection.
> > 
> > Does anyone have suggestions how to address this?
> > 
> > Hi Jose,
> > 
> > Here is my guess. PETSc sets all of the event ids (using Register) when the 
> > dynamic libraries get loaded. If they are not loaded,
> > then the library initialization function is called when some function from 
> > that library is used. My guess is that we put this init check
> > in MatCreate(), but that is not called when you create your shell matrix 
> > and thus the events are not initialized correctly for you until
> > later. Can you check?
> > 
> >   Thanks,
> > 
> >  Matt
> >  
> > Note: this is with 1 MPI process.
> > Note: the solver creates a shell matrix with MATOP_MULT_TRANSPOSE.
> > 
> > Thanks.
> > Jose
> > 
> > 
> > PetscBarrier  16 1.0 7.5579e-04 1.0 4.38e+03 1.0 0.0e+00 0.0e+00 
> > 0.0e+00  0  0  0  0  0   0  0  0  0  0 6
> > MatMult   82 1.0 6.1590e-01 1.0 9.25e+05 1.0 0.0e+00 0.0e+00 
> > 0.0e+00 81 97  0  0  0  81 97  0  0  0 2
> > MatMultTranspose  16 1.0 7.4625e-04 1.0 4.38e+03 1.0 0.0e+00 0.0e+00 
> > 0.0e+00  0  0  0  0  0   0  0  0  0  0 6
> > MatAssemblyBegin   4 1.0 5.2452e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> > MatAssemblyEnd 4 1.0 2.8920e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> > MatTranspose   2 1.0 1.6265e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> > SVDSetUp   1 1.0 1.8686e-02 1.0 2.72e+02 1.0 0.0e+00 0.0e+00 
> > 0.0e+00  2  0  0  0  0   2  0  0  0  0 0
> > SVDSolve   1 1.0 5.5965e-01 1.0 7.95e+05 1.0 0.0e+00 0.0e+00 
> > 0.0e+00 74 83  0  0  0  74 83  0  0  0 1
> > EPSSetUp   1 1.0 9.5146e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > 0.0e+00  1  0  0  0  0   1  0  0  0  0 0
> > EPSSolve   1 1.0 5.4082e-01 1.0 7.94e+05 1.0 0.0e+00 0.0e+00 
> > 0.0e+00 71 83  0  0  0  71 83  0  0  0 1
> > STSetUp1 1.0 4.8406e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > 0.0e+00  1  0  0  0  0   1  0  0  0  0 0
> > STComputeOperatr   1 1.0 1.2653e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> > STApply   24 1.0 6.1569e-01 1.0 8.84e+05 1.0 0.0e+00 0.0e+00 
> > 0.0e+00 81 92  0  0  0  81 92  0  0  0 1
> > STMatSolve24 1.0 6.1556e-01 1.0 8.80e+05 1.0 0.0e+00 0.0e+00 
> > 0.0e+00 81 92  0  0  0  81 92  0  0  0 1
> > KSPSetUp   1 1.0 2.8465e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> > 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> > KSPSolve  24 1.0 6.1551e-01 1.0 8.80e+05 1.0 0.0e+00 0.0e+00 
> > 0.0e+00 81 92  0  0  0  81 92  0  0  0 1
> > KSPGMRESOrthog   480 1.0 4.3219e-01 1.0 3.98e+05 1.0 0.0e+00 0.0e+00 
> > 0.0e+00 57 42  0  0  0  57 42  0  0 

Re: [petsc-dev] Issue with -log_view

2023-05-09 Thread Jose E. Roman
But MatCreateShell() calls MatInitializePackage() (via MatCreate()) and also 
the main program creates a regular Mat. The events should have been registered 
by the time the shell matrix operations are invoked.


> On 9 May 2023, at 14:13, Matthew Knepley wrote:
> 
> On Tue, May 9, 2023 at 7:17 AM Jose E. Roman  wrote:
> Hi.
> 
> We are seeing a strange thing in the -log_view output with one of the SLEPc 
> solvers. It is probably an issue with SLEPc, but we don't know how to debug 
> it.
> 
> It can be reproduced for instance with
> 
>  $ ./ex45 -m 15 -n 20 -p 21 -svd_nsv 4 -svd_ncv 9 -log_view
> 
> The log_view events are listed at the end of this email. The first one 
> (PetscBarrier) is wrong, because PetscBarrier is never called, if I place a 
> breakpoint in PetscBarrier() it will never be hit. Also, in that event it 
> reports some nonzero Mflop/s, which suggests that it corresponds to another 
> event (not PetscBarrier). Furthermore, the count of the PetscBarrier event 
> always matches the count of MatMultTranspose, so there must be a connection.
> 
> Does anyone have suggestions how to address this?
> 
> Hi Jose,
> 
> Here is my guess. PETSc sets all of the event ids (using Register) when the 
> dynamic libraries get loaded. If they are not loaded,
> then the library initialization function is called when some function from 
> that library is used. My guess is that we put this init check
> in MatCreate(), but that is not called when you create your shell matrix and 
> thus the events are not initialized correctly for you until
> later. Can you check?
> 
>   Thanks,
> 
>  Matt
>  
> Note: this is with 1 MPI process.
> Note: the solver creates a shell matrix with MATOP_MULT_TRANSPOSE.
> 
> Thanks.
> Jose
> 
> 
> PetscBarrier  16 1.0 7.5579e-04 1.0 4.38e+03 1.0 0.0e+00 0.0e+00 
> 0.0e+00  0  0  0  0  0   0  0  0  0  0 6
> MatMult   82 1.0 6.1590e-01 1.0 9.25e+05 1.0 0.0e+00 0.0e+00 
> 0.0e+00 81 97  0  0  0  81 97  0  0  0 2
> MatMultTranspose  16 1.0 7.4625e-04 1.0 4.38e+03 1.0 0.0e+00 0.0e+00 
> 0.0e+00  0  0  0  0  0   0  0  0  0  0 6
> MatAssemblyBegin   4 1.0 5.2452e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> MatAssemblyEnd 4 1.0 2.8920e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> MatTranspose   2 1.0 1.6265e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> SVDSetUp   1 1.0 1.8686e-02 1.0 2.72e+02 1.0 0.0e+00 0.0e+00 
> 0.0e+00  2  0  0  0  0   2  0  0  0  0 0
> SVDSolve   1 1.0 5.5965e-01 1.0 7.95e+05 1.0 0.0e+00 0.0e+00 
> 0.0e+00 74 83  0  0  0  74 83  0  0  0 1
> EPSSetUp   1 1.0 9.5146e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> 0.0e+00  1  0  0  0  0   1  0  0  0  0 0
> EPSSolve   1 1.0 5.4082e-01 1.0 7.94e+05 1.0 0.0e+00 0.0e+00 
> 0.0e+00 71 83  0  0  0  71 83  0  0  0 1
> STSetUp1 1.0 4.8406e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> 0.0e+00  1  0  0  0  0   1  0  0  0  0 0
> STComputeOperatr   1 1.0 1.2653e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> STApply   24 1.0 6.1569e-01 1.0 8.84e+05 1.0 0.0e+00 0.0e+00 
> 0.0e+00 81 92  0  0  0  81 92  0  0  0 1
> STMatSolve24 1.0 6.1556e-01 1.0 8.80e+05 1.0 0.0e+00 0.0e+00 
> 0.0e+00 81 92  0  0  0  81 92  0  0  0 1
> KSPSetUp   1 1.0 2.8465e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> KSPSolve  24 1.0 6.1551e-01 1.0 8.80e+05 1.0 0.0e+00 0.0e+00 
> 0.0e+00 81 92  0  0  0  81 92  0  0  0 1
> KSPGMRESOrthog   480 1.0 4.3219e-01 1.0 3.98e+05 1.0 0.0e+00 0.0e+00 
> 0.0e+00 57 42  0  0  0  57 42  0  0  0 1
> PCSetUp1 1.0 3.8147e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> PCApply  504 1.0 2.1615e-02 1.0 1.01e+04 1.0 0.0e+00 0.0e+00 
> 0.0e+00  3  1  0  0  0   3  1  0  0  0 0
> BVCopy27 1.0 3.0560e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
> 0.0e+00  0  0  0  0  0   0  0  0  0  0 0
> BVMultVec 38 1.0 3.7060e-03 1.0 8.88e+03 1.0 0.0e+00 0.0e+00 
> 0.0e+00  0  1  0  0  0   0  1  0  0  0 2
> BVMultInPlace  3 1.0 1.0681e-04 1.0 4.32e+03 1.0 0.0e+00 0.0e+00 
> 0.0e+00  0  0  0  0  0   0  0  0  0  040
> BVDotVec  38 1.0 4.5121e-03 1.0 4.38e+04 1.0 0.0e+00 0.0e+00 
> 0.0e+00  1  5  0  0  0   1  5  0  0  010
> BVOrthogonalizeV  20 1.0 1.3182e-02 1.0 5.33e+04 1.0 0.0e+00 0.0e+00 
> 0.0e+00  2  6  0  0  0   2  6  0  0  0 4
> BVScale  

[petsc-dev] Issue with -log_view

2023-05-09 Thread Jose E. Roman
Hi.

We are seeing a strange thing in the -log_view output with one of the SLEPc 
solvers. It is probably an issue with SLEPc, but we don't know how to debug it.

It can be reproduced for instance with

 $ ./ex45 -m 15 -n 20 -p 21 -svd_nsv 4 -svd_ncv 9 -log_view

The log_view events are listed at the end of this email. The first one 
(PetscBarrier) is wrong, because PetscBarrier is never called, if I place a 
breakpoint in PetscBarrier() it will never be hit. Also, in that event it 
reports some nonzero Mflop/s, which suggests that it corresponds to another 
event (not PetscBarrier). Furthermore, the count of the PetscBarrier event 
always matches the count of MatMultTranspose, so there must be a connection.

Does anyone have suggestions how to address this?
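
As diagnosed elsewhere in this thread, the cause turned out to be an event (MAT_MultHermitianTranspose) that was used without being registered. A plain-Python sketch (hypothetical names, not PETSc's actual logging code) of how an unregistered event id, which defaults to 0, silently credits its counts to the first registered event:

```python
# Hypothetical sketch of id-based event logging: an event that was never
# registered keeps id 0, so its counts and flops are credited to whatever
# event registered first (here "PetscBarrier"), matching the symptom that
# PetscBarrier's count always equals MatMultTranspose's.
registry = {}   # event name -> event id
stats = []      # per-id [count, flops]

def register(name):
    registry[name] = len(stats)
    stats.append([0, 0])

def log_event(event_id, flops):
    stats[event_id][0] += 1
    stats[event_id][1] += flops

register("PetscBarrier")   # id 0: the first registered event
register("MatMult")        # id 1

UNREGISTERED = 0           # a forgotten Register() leaves the id at 0
for _ in range(16):        # 16 calls through the unregistered event...
    log_event(UNREGISTERED, 273)

print(stats[registry["PetscBarrier"]][0])  # 16 calls misattributed
```

This also explains the nonzero Mflop/s on PetscBarrier: the flops belong to the unregistered event.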

Note: this is with 1 MPI process.
Note: the solver creates a shell matrix with MATOP_MULT_TRANSPOSE.

Thanks.
Jose


PetscBarrier  16 1.0 7.5579e-04 1.0 4.38e+03 1.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 6
MatMult   82 1.0 6.1590e-01 1.0 9.25e+05 1.0 0.0e+00 0.0e+00 
0.0e+00 81 97  0  0  0  81 97  0  0  0 2
MatMultTranspose  16 1.0 7.4625e-04 1.0 4.38e+03 1.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 6
MatAssemblyBegin   4 1.0 5.2452e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
MatAssemblyEnd 4 1.0 2.8920e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
MatTranspose   2 1.0 1.6265e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
SVDSetUp   1 1.0 1.8686e-02 1.0 2.72e+02 1.0 0.0e+00 0.0e+00 
0.0e+00  2  0  0  0  0   2  0  0  0  0 0
SVDSolve   1 1.0 5.5965e-01 1.0 7.95e+05 1.0 0.0e+00 0.0e+00 
0.0e+00 74 83  0  0  0  74 83  0  0  0 1
EPSSetUp   1 1.0 9.5146e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  1  0  0  0  0   1  0  0  0  0 0
EPSSolve   1 1.0 5.4082e-01 1.0 7.94e+05 1.0 0.0e+00 0.0e+00 
0.0e+00 71 83  0  0  0  71 83  0  0  0 1
STSetUp1 1.0 4.8406e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  1  0  0  0  0   1  0  0  0  0 0
STComputeOperatr   1 1.0 1.2653e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
STApply   24 1.0 6.1569e-01 1.0 8.84e+05 1.0 0.0e+00 0.0e+00 
0.0e+00 81 92  0  0  0  81 92  0  0  0 1
STMatSolve24 1.0 6.1556e-01 1.0 8.80e+05 1.0 0.0e+00 0.0e+00 
0.0e+00 81 92  0  0  0  81 92  0  0  0 1
KSPSetUp   1 1.0 2.8465e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
KSPSolve  24 1.0 6.1551e-01 1.0 8.80e+05 1.0 0.0e+00 0.0e+00 
0.0e+00 81 92  0  0  0  81 92  0  0  0 1
KSPGMRESOrthog   480 1.0 4.3219e-01 1.0 3.98e+05 1.0 0.0e+00 0.0e+00 
0.0e+00 57 42  0  0  0  57 42  0  0  0 1
PCSetUp1 1.0 3.8147e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
PCApply  504 1.0 2.1615e-02 1.0 1.01e+04 1.0 0.0e+00 0.0e+00 
0.0e+00  3  1  0  0  0   3  1  0  0  0 0
BVCopy27 1.0 3.0560e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
BVMultVec 38 1.0 3.7060e-03 1.0 8.88e+03 1.0 0.0e+00 0.0e+00 
0.0e+00  0  1  0  0  0   0  1  0  0  0 2
BVMultInPlace  3 1.0 1.0681e-04 1.0 4.32e+03 1.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  040
BVDotVec  38 1.0 4.5121e-03 1.0 4.38e+04 1.0 0.0e+00 0.0e+00 
0.0e+00  1  5  0  0  0   1  5  0  0  010
BVOrthogonalizeV  20 1.0 1.3182e-02 1.0 5.33e+04 1.0 0.0e+00 0.0e+00 
0.0e+00  2  6  0  0  0   2  6  0  0  0 4
BVScale   24 1.0 4.8089e-04 1.0 4.80e+02 1.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 1
BVNormVec  4 1.0 5.4097e-04 1.0 3.67e+03 1.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 7
BVNormalize1 1.0 1.3154e-03 1.0 3.75e+03 1.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 3
BVSetRandom1 1.0 9.1791e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
BVMatMultVec  19 1.0 4.6828e-01 1.0 6.99e+05 1.0 0.0e+00 0.0e+00 
0.0e+00 62 73  0  0  0  62 73  0  0  0 1
DSSolve3 1.0 8.8906e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
DSVectors  7 1.0 2.8920e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
DSOther   12 1.0 3.8576e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 0
VecDot 4 1.0 8.7500e-05 1.0 1.56e+02 1.0 0.0e+00 0.0e+00 
0.0e+00  0  0  0  0  0   0  0  0  0  0 2
VecMDot  488 1.0 6.0463e-04 1.0 1.98e+05 1.0 0.0e+00 0.0e+00 
0.0e+00  0 21  0  0  0   0 21  0  0  0   327
VecNorm  520 1.0 1.0412e-02 1.0 2.02e+04 1.0 0.0e+00 

Re: [petsc-dev] Fortran-auto-interfaces

2023-01-10 Thread Jose E. Roman
After I add the definition of tTao and regenerate the Fortran stubs, I get
this in petsctao.h90:

  subroutine TaoGetLMVMMatrix(a,b,z)
   import tMat,tTao
   ...

instead of 
  subroutine TaoGetLMVMMatrix(a,b,z)
   import tMat




> On 10 Jan 2023, at 20:40, Blaise Bourdin wrote:
> 
> Hi Jose,
> 
> I have created the type tTAO and PETSC_NULL_TAO; what I need to figure out is 
> how to get bfort to import tTAO in each auto interface, for instance
> 
> Blaise
> 
> 
> 
>> On Jan 10, 2023, at 12:23 PM, Jose E. Roman  wrote:
>> 
>> The files under ftn-auto-interfaces are generated with bfort when you run 
>> configure. You can also force their generation with 'make allfortranstubs'.
>> 
>> In the case of Tao I think the problem is that the definition of tTao is 
>> missing. You should have something like this in src/tao/f90-mod/petsctao.h:
>> 
>>  type tTao
>>PetscFortranAddr:: v PETSC_FORTRAN_TYPE_INITIALIZE
>>  end type tTao
>> 
>> 
>> Jose
>> 
>> 
>>> On 10 Jan 2023, at 17:22, Blaise Bourdin wrote:
>>> 
>>> Hi,
>>> 
>>> I am trying to bring TAO fortran interfaces up to par with SNES. How is 
>>> tao/f90-mod/ftn-auto-interfaces/petsctao.h90 generated? I would need to 
>>> import tTAO and replace the call to the “Tao” macro with TAO.
>>> 
>>> Regards,
>>> Blaise
>>> 
>>> — 
>>> Canada Research Chair in Mathematical and Computational Aspects of Solid 
>>> Mechanics (Tier 1)
>>> Professor, Department of Mathematics & Statistics
>>> Hamilton Hall room 409A, McMaster University
>>> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
>>> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
>>> 
>> 
> 
> — 
> Canada Research Chair in Mathematical and Computational Aspects of Solid 
> Mechanics (Tier 1)
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> 



Re: [petsc-dev] Fortran-auto-interfaces

2023-01-10 Thread Jose E. Roman
The files under ftn-auto-interfaces are generated with bfort when you run 
configure. You can also force their generation with 'make allfortranstubs'.

In the case of Tao I think the problem is that the definition of tTao is 
missing. You should have something like this in src/tao/f90-mod/petsctao.h:

  type tTao
PetscFortranAddr:: v PETSC_FORTRAN_TYPE_INITIALIZE
  end type tTao


Jose
 

> On 10 Jan 2023, at 17:22, Blaise Bourdin wrote:
> 
> Hi,
> 
> I am trying to bring TAO fortran interfaces up to par with SNES. How is 
> tao/f90-mod/ftn-auto-interfaces/petsctao.h90 generated? I would need to 
> import tTAO and replace the call to the “Tao” macro with TAO.
> 
> Regards,
> Blaise
> 
> — 
> Canada Research Chair in Mathematical and Computational Aspects of Solid 
> Mechanics (Tier 1)
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> 



Re: [petsc-dev] Jacobi (smoothing) not staying on GPU

2022-06-08 Thread Jose E. Roman
Add an implementation of MatGetDiagonal_SeqAIJCUSPARSE(), which is missing. Use 
for example this: 
https://stackoverflow.com/questions/60311408/how-to-get-the-diagonal-of-a-sparse-matrix-in-cusparse
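
For reference, the linked approach reduces to a per-row scan of the CSR arrays for the diagonal column. A plain-Python sketch of that kernel (the actual MatGetDiagonal_SeqAIJCUSPARSE implementation would run this row loop as a CUDA kernel over cuSPARSE's row-pointer/column-index/value arrays; the function name here is illustrative):

```python
def csr_diagonal(n, row_ptr, col_idx, vals):
    """Extract the main diagonal of an n-by-n CSR matrix.

    Row i's stored entries live in [row_ptr[i], row_ptr[i+1]); we scan
    them for column i, leaving 0.0 where no diagonal entry is stored.
    A one-thread-per-row CUDA kernel would perform the same loop body.
    """
    diag = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            if col_idx[k] == i:
                diag[i] = vals[k]
                break
    return diag

# 3x3 example: [[4, 1, 0], [0, 5, 2], [3, 0, 6]]
row_ptr = [0, 2, 4, 6]
col_idx = [0, 1, 1, 2, 0, 2]
vals = [4.0, 1.0, 5.0, 2.0, 3.0, 6.0]
print(csr_diagonal(3, row_ptr, col_idx, vals))  # [4.0, 5.0, 6.0]
```

Keeping this kernel on the device is what avoids the CPU-GPU transfer Mark observed.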

Jose

> On 8 Jun 2022, at 3:21, Mark Adams wrote:
> 
> I am looking at TS/SNES/KSP/GAMG solve with Landau, which is all on the GPU, 
> but it looks like MatGetDiagonal (see attached), and to a lesser extent 
> VecPointWiseMult (biggest red band on the right side under PCApply), are 
> resulting in expensive CPU-GPU movement. MatGetDiagonal on the fine grid is 
> taking about 10x the time of TFQMR/GAMG iteration.
> 
> Attached is a view of this with CUDA and an nsys data file with Kokkos that 
> is pretty much the same.
> 
> Any thoughts on how to fix this?
> 
> Thanks,
> Mark
> 



Re: [petsc-dev] odd log behavior

2022-04-26 Thread Jose E. Roman
You have to add -log_view_gpu_time
See https://gitlab.com/petsc/petsc/-/merge_requests/5056
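
A tiny illustration (hypothetical, not PETSc code) of why whole rows turn to NaN: once the event time is NaN, every rate derived from it is NaN as well:

```python
import math

# Without -log_view_gpu_time the logger records NaN for GPU event times
# (presumably to avoid synchronizing the device around every event);
# any quantity derived from the time, such as Mflop/s, is then NaN too.
gpu_timing_enabled = False
event_time = 1.23e-4 if gpu_timing_enabled else math.nan
mflops = 12.0 / event_time / 1e6
print(math.isnan(mflops))  # True
```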

Jose


> On 26 Apr 2022, at 16:39, Mark Adams wrote:
> 
> I'm seeing this on Perlmutter with Kokkos-CUDA. Nans in most log timing data 
> except the two 'Solve' lines.
> Just cg/jacobi on snes/ex56.
> 
> Any ideas?
>  
> VecTDot2 1.0   nan nan 1.20e+01 1.0 0.0e+00 0.0e+00 0.0e+00  
> 0  0  0  0  0   0  0  0  0  0  -nan-nan  0 0.00e+000 0.00e+00 100
> VecNorm2 1.0   nan nan 1.00e+01 1.0 0.0e+00 0.0e+00 0.0e+00  
> 0  0  0  0  0   0  0  0  0  0  -nan-nan  0 0.00e+000 0.00e+00 100
> VecCopy2 1.0   nan nan 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  
> 0  0  0  0  0   0  0  0  0  0  -nan-nan  0 0.00e+000 0.00e+00  0
> VecSet 5 1.0   nan nan 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  
> 0  0  0  0  0   0  0  0  0  0  -nan-nan  0 0.00e+000 0.00e+00  0
> VecAXPY4 1.0   nan nan 2.40e+01 1.0 0.0e+00 0.0e+00 0.0e+00  
> 0  0  0  0  0   1  0  0  0  0  -nan-nan  0 0.00e+000 0.00e+00 100
> VecPointwiseMult   1 1.0   nan nan 3.00e+00 1.0 0.0e+00 0.0e+00 0.0e+00  
> 0  0  0  0  0   0  0  0  0  0  -nan-nan  0 0.00e+000 0.00e+00 100
> KSPSetUp   1 1.0   nan nan 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  
> 0  0  0  0  0   0  0  0  0  0  -nan-nan  0 0.00e+000 0.00e+00  0
> KSPSolve   1 1.0 4.0514e-04 1.0 5.50e+01 1.0 0.0e+00 0.0e+00 
> 0.0e+00  1  0  0  0  0   2  0  0  0  0 0-nan  0 0.00e+000 
> 0.00e+00 100
> SNESSolve  1 1.0 2.2128e-02 1.0 5.55e+05 1.0 0.0e+00 0.0e+00 
> 0.0e+00 72 56  0  0  0 100100  0  0  025-nan  0 0.00e+000 
> 0.00e+00  0



Re: [petsc-dev] ftn-auto in $PETSC_DIR/include ?

2022-01-27 Thread Jose E. Roman
That is because PetscLogFlops() has an auto-generated Fortran stub; it is a 
PETSC_STATIC_INLINE function in include/petsclog.h.

Jose


> On 27 Jan 2022, at 13:15, Stefano Zampini wrote:
> 
> Just noticed this. Is it normal to have a ftn-auto directory generated by 
> bfort in $PETSC_DIR/include?
> 
> (ecrcml-user) [szampini@localhost petsc]$ ls include/ftn-auto
> makefile  petscloghf.c
> 
> 
> -- 
> Stefano



Re: [petsc-dev] Help tracking down unexpected Fortran behavior

2021-12-06 Thread Jose E. Roman
PCSetType() has an interface in src/ksp/f90-mod/petscpc.h90 while 
PCFactorSetMatOrderingType() does not.

I don't know if there is a clear criterion for when to add an interface in the 
corresponding h90 file. One criterion is that we need an F90 interface when 
one of the arguments is allowed to be NULL.

Jose


> On 6 Dec 2021, at 11:20, Patrick Sanan wrote:
> 
> I ran into an unexpected seg fault, which took me too long to realize was 
> because of the old-school "you forgot the ierr" mistake! I was expecting the 
> compiler to complain, since we've had better checking for a while. E.g. as in 
> the attached code to reproduce, my compiler indeed errors on this
> 
> call PCSetType(pc, PCLU)
> 
> but not this
> 
>call PCFactorSetMatOrderingType(pc, MATORDERINGEXTERNAL)
> 
> I'm not yet seeing what the difference is, but there is still plenty I don't 
> understand about how the custom fortran interfaces work. E.g. both of those 
> functions have custom interfaces in ftn-custom directories, accepting an 
> additional "len" argument to be used with FIXCHAR(), but I'm  not sure how 
> that argument is ultimately populated.
> 
> 
> 



Re: [petsc-dev] petsc4py build

2021-09-30 Thread Jose E. Roman
./configure --with-petsc4py=1

and then run a test with export PYTHONPATH=$PETSC_DIR/$PETSC_ARCH/lib
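
The PYTHONPATH export just makes the in-tree petsc4py build importable; the same effect from inside Python (the fallback paths below are placeholders, not real locations) is:

```python
import os
import sys

# Prepend the in-tree petsc4py build directory, where configure placed the
# module, so `import petsc4py` resolves to this build. PETSC_DIR and
# PETSC_ARCH are the usual PETSc environment variables; the fallback
# values here are placeholders for illustration only.
petsc_dir = os.environ.get("PETSC_DIR", "/path/to/petsc")
petsc_arch = os.environ.get("PETSC_ARCH", "arch-linux-c-debug")
lib_dir = os.path.join(petsc_dir, petsc_arch, "lib")
sys.path.insert(0, lib_dir)
print(sys.path[0] == lib_dir)  # True
```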

Jose


> On 30 Sep 2021, at 14:05, Matthew Knepley wrote:
> 
> If I add binding code as part of my MR, how do I check the build on my 
> machine? I have gotten confused since we merged the source tree.
> 
>   Thanks,
> 
>  Matt
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/



Re: [petsc-dev] builds of PETSc based packages should not get their own secret files

2021-01-04 Thread Jose E. Roman
The slepc4py source is not yet included in the SLEPc source, but this will be done soon.
Jose


> On 5 Jan 2021, at 8:30, Barry Smith wrote:
> 
> 
>   Thanks.
> 
>I'd like to move to also building the python bindings for both PETSc and 
> SLEPc by default in the future.  Gives better testing and makes it easier for 
> users who shouldn't need to worry about adding obscure options to get the 
> python bindings built. Users should be able to turn off python bindings as 
> opposed to having to turn them on.
> 
>Barry
> 
> 
>> On Jan 5, 2021, at 1:05 AM, Pierre Jolivet  wrote:
>> 
>> 
>> 
>>> On 5 Jan 2021, at 1:33 AM, Barry Smith  wrote:
>>> 
>>> 
>>>  For packages that are built after PETSc configure (and/or install), such as 
>>> slepc, hpddm, etc., we've traditionally saved the output in its own file 
>>> stashed away somewhere. 
>>> 
>>>  For the CI this is driving me nuts because when they fail the output is 
>>> essentially "lost" and thus it is impossible to determine what has gone 
>>> wrong. 
>>> 
>>>  I have started to directly output in the same stream as the PETSc compiles 
>>> to make debugging much easier. Generally the packages are relatively small 
>>> and don't have a huge amount of output when compiling correctly.  I did it 
>>> for PETSc4py and SLEPc (how slepc4py's output gets hidden in slepc is still 
>>> a mystery to me). 
>> 
>> I guess we could change the redirect rule here 
>> https://gitlab.com/slepc/slepc/-/blob/master/config/packages/slepc4py.py#L53?
>> But we’d need to check whether slepc4py is built with --download-slepc 
>> --download-slepc-configure-arguments="--download-slepc4py” (inside PETSc) or 
>> simply --download-slepc4py (inside SLEPc).
>> 
>> I’m in favour of having a single file because it can be quite nightmarish to 
>> ask users for multiple .log files hidden in different folders, but I can 
>> understand if we stick with the current approach as well.
>> 
>> Thanks,
>> Pierre
>> 
>>>  Are there any large downsides to this plan?
>>> 
>>>  Barry
> 



Re: [petsc-dev] checkbadSource issue

2020-12-31 Thread Jose E. Roman
I am getting the same on some machines. I don't think this is due to a recent 
change.

In the pipeline, it is run as
$ make checkbadSource SHELL=bash
which solves the issue.


On the other hand, in the 'checksource' job in the pipelines, there are errors 
(probably not important):
gmakefile.test:100: arch-linux-c-debug/tests/testfiles: No such file or 
directory
gmakefile:67: arch-linux-c-debug/lib/petsc/conf/files: No such file or directory

Jose

> On 31 Dec 2020, at 9:30, Stefano Zampini wrote:
> 
> Just rebased my MR over master
> 
> zampins@vulture:~/Devel/petsc$ make checkbadSource
> /bin/sh: 1: let: not found
> /bin/sh: 2: [: -gt: unexpected operator
> /bin/sh: 6: test: Illegal number: !
> /home/zampins/Devel/petsc/lib/petsc/conf/rules:660: recipe for target 
> 'checkbadSource' failed
> make[1]: *** [checkbadSource] Error 2
> GNUmakefile:17: recipe for target 'checkbadSource' failed
> make: *** [checkbadSource] Error 2
> 
> -- 
> Stefano



Re: [petsc-dev] MATOP_MAT_MULT

2020-05-14 Thread Jose E. Roman
I think this will be useful for SLEPc as it is now, but I cannot test it because 
some changes are required in SLEPc. I will try to find time to implement them 
in the coming days.
Jose


> On 12 May 2020, at 17:28, Pierre Jolivet wrote:
> 
> MatShellSetMatProductOperation looks really nice to me, thanks!
> Pierre
> 
>> On 12 May 2020, at 12:13 PM, Stefano Zampini  
>> wrote:
>> 
>> Pierre and Jose
>> 
>> I have added support for MatMat callbacks operations for MATSHELL, you may 
>> want to take a look here for how to use it 
>> https://gitlab.com/petsc/petsc/-/merge_requests/2712/diffs?commit_id=7f809910e2bafe055242a87d70afd114664ffaf8
>> This is the relevant commit 
>> https://gitlab.com/petsc/petsc/-/merge_requests/2712/diffs?commit_id=e01f573d2ca0ec07c54db508a94a042fba4038de
>> 
>> Let me know if you need more customization (e.g. attach data to the product 
>> in a more systematic way) or if it can already fit your frameworks.
>> 
>> Best
>> Stefano
>> 
>> Il giorno dom 10 mag 2020 alle ore 21:04 Stefano Zampini 
>>  ha scritto:
>> 
>> 
>>> On May 10, 2020, at 8:56 PM, Jose E. Roman  wrote:
>>> 
>>> 
>>> 
>>>> On 10 May 2020, at 19:42, Stefano Zampini wrote:
>>>> 
>>>> Glad to hear it works. Anyway, without the MatShellSetVecType call the 
>>>> code was erroring for me, not leaking memory.
>>>> Were you also providing -vec_type cuda at the command line, or what? 
>>>> Mark recently noted a similar leak, and I was wondering what was the cause 
>>>> for yours. A MWE would be great.
>>> 
>>> I never use -vec_type cuda because all SLEPc tests that use CUDA create 
>>> vectors with MatCreateVecs(). Until now I had never tested examples with 
>>> shell matrices.
>>> 
>>> This kind of issue would be easier to detect if macros such as 
>>> PetscCheckSameTypeAndComm() actually compared type_name.
>> 
>> VECCUDA specific code checks for type names (either VECSEQCUDA or 
>> VECMPICUDA) and these errored in my case. (Try by simply removing 
>> MatShellSetVecType)
>> 
>> zampins@jasmine:~/petsc/src/mat/tests$ git diff
>> diff --git a/src/mat/tests/ex69.c b/src/mat/tests/ex69.c
>> index b04652d..2a9374d 100644
>> --- a/src/mat/tests/ex69.c
>> +++ b/src/mat/tests/ex69.c
>> @@ -89,7 +89,7 @@ int main(int argc,char **argv)
>>if (use_shell) {
>>  ierr = 
>> MatCreateShell(PetscObjectComm((PetscObject)v),nloc,nloc,n,n,A,);CHKERRQ(ierr);
>>  ierr = 
>> MatShellSetOperation(S,MATOP_MULT,(void(*)(void))MatMult_S);CHKERRQ(ierr);
>> -ierr = MatShellSetVecType(S,vtype);CHKERRQ(ierr);
>> +//ierr = MatShellSetVecType(S,vtype);CHKERRQ(ierr);
>>  /* we could have called the general convertor also */
>>  /* ierr = MatConvert(A,MATSHELL,MAT_INITIAL_MATRIX,);CHKERRQ(ierr); */
>>} else {
>> zampins@jasmine:~/petsc/src/mat/tests$ ./ex69 -use_shell
>> [0]PETSC ERROR: - Error Message 
>> --
>> [0]PETSC ERROR: Invalid argument
>> [0]PETSC ERROR: Object (seq) is not seqcuda or mpicuda
>> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for 
>> trouble shooting.
>> [0]PETSC ERROR: Petsc Development GIT revision: v3.13-213-g441560f  GIT 
>> Date: 2020-04-25 17:01:13 +0300
>> [0]PETSC ERROR: ./ex69 on a arch-gpu-double-openmp-openblas named jasmine by 
>> zampins Sun May 10 21:03:09 2020
>> [0]PETSC ERROR: Configure options --download-cub=1 
>> --download-hara-commit=HEAD --download-hara=1 --download-kblas=1 
>> --download-magma=1 --download-openblas-use-pthreads=1 --download-openblas=1 
>> --download-superlu_dist-commit=HEAD --download-superlu_dist=1 
>> --download-mumps=1 --download-scalapack=1 
>> --with-cc=/opt/ecrc/intel/2018/compilers_and_libraries/linux/mpi/intel64/bin/mpicc
>>  --with-cuda-gencodearch=37 --with-cuda=1 
>> --with-cudac=/opt/ecrc/cuda/10.1/bin/nvcc --with-cxx-dialect=C++11 
>> --with-cxx=/opt/ecrc/intel/2018/compilers_and_libraries/linux/mpi/intel64/bin/mpicxx
>>  
>> --with-fc=/opt/ecrc/intel/2018/compilers_and_libraries/linux/mpi/intel64/bin/mpif90
>>  --with-fortran-bindings=0 --with-magma-fortran-bindings=0 
>> --with-opencl-include=/opt/ecrc/cuda/10.1/include 
>> --with-opencl-lib="-L/opt/ecrc/cuda/10.1/lib64 -lOpenCL" --with-openmp=1 
>> --with-precision=double PETSC_ARCH=arch-gpu-double-openmp-openblas
>> [0]PETSC ERROR: #1 Ve

Re: [petsc-dev] MATOP_MAT_MULT

2020-05-10 Thread Jose E. Roman



> On 10 May 2020, at 19:42, Stefano Zampini wrote:
> 
> Glad to hear it works. Anyway, without the MatShellSetVecType call the code 
> was erroring for me, not leaking memory.
> Were you also providing -vec_type cuda on the command line, or what? 
> Mark recently noted a similar leak, and I was wondering what was the cause 
> for yours. A MWE would be great.

I never use -vec_type cuda because all SLEPc tests that use CUDA create vectors 
with MatCreateVecs(). Until now I had never tested examples with shell matrices.

This kind of issue would be easier to detect if macros such as 
PetscCheckSameTypeAndComm() actually compared type_name. 

> 
> BTW, the branch also provide MatDenseReplaceArray() now.

Yes, I saw it. Thanks.

> Last thing I want to do is to support the user to provide MatMat (AB and AtB 
> at least) callbacks for MATSHELL 
> Can SLEPc benefit from such a feature ?

Some solvers yes.

> 
>> On May 10, 2020, at 7:47 PM, Jose E. Roman  wrote:
>> 
>> Thanks for the hints. I have modified my branch. I was missing the 
>> MatShellSetVecType() call. Now everything works fine and all tests are clean.
>> 
>> Jose
>> 
>> 
>>> On 9 May 2020, at 21:32, Stefano Zampini wrote:
>>> 
>>> Jose
>>> 
>>> I have just pushed an updated example with the MatMat operation, and I do 
>>> not see the memory leak. Can you check?
>>> 
>>> zampins@jasmine:~/petsc$ make -f gmakefile.test test search='mat%' 
>>> searchin='ex69' PETSC_OPTIONS='-malloc -malloc_dump -malloc_debug' 
>>> /usr/bin/python /home/zampins/petsc/config/gmakegentest.py 
>>> --petsc-dir=/home/zampins/petsc 
>>> --petsc-arch=arch-gpu-double-openmp-openblas 
>>> --testdir=./arch-gpu-double-openmp-openblas/tests
>>> Using MAKEFLAGS: -- PETSC_OPTIONS=-malloc -malloc_dump -malloc_debug 
>>> searchin=ex69 search=mat%
>>> CC arch-gpu-double-openmp-openblas/tests/mat/tests/ex69.o
>>>CLINKER arch-gpu-double-openmp-openblas/tests/mat/tests/ex69
>>>   TEST 
>>> arch-gpu-double-openmp-openblas/tests/counts/mat_tests-ex69_1.counts
>>> ok mat_tests-ex69_1+nsize-1test-0_l-0_use_shell-0
>>> ok diff-mat_tests-ex69_1+nsize-1test-0_l-0_use_shell-0
>>> ok mat_tests-ex69_1+nsize-1test-0_l-0_use_shell-1
>>> ok diff-mat_tests-ex69_1+nsize-1test-0_l-0_use_shell-1
>>> ok mat_tests-ex69_1+nsize-1test-0_l-5_use_shell-0
>>> ok diff-mat_tests-ex69_1+nsize-1test-0_l-5_use_shell-0
>>> ok mat_tests-ex69_1+nsize-1test-0_l-5_use_shell-1
>>> ok diff-mat_tests-ex69_1+nsize-1test-0_l-5_use_shell-1
>>> ok mat_tests-ex69_1+nsize-1test-1_l-0_use_shell-0
>>> ok diff-mat_tests-ex69_1+nsize-1test-1_l-0_use_shell-0
>>> ok mat_tests-ex69_1+nsize-1test-1_l-0_use_shell-1
>>> ok diff-mat_tests-ex69_1+nsize-1test-1_l-0_use_shell-1
>>> ok mat_tests-ex69_1+nsize-1test-1_l-5_use_shell-0
>>> ok diff-mat_tests-ex69_1+nsize-1test-1_l-5_use_shell-0
>>> ok mat_tests-ex69_1+nsize-1test-1_l-5_use_shell-1
>>> ok diff-mat_tests-ex69_1+nsize-1test-1_l-5_use_shell-1
>>> ok mat_tests-ex69_1+nsize-1test-2_l-0_use_shell-0
>>> ok diff-mat_tests-ex69_1+nsize-1test-2_l-0_use_shell-0
>>> ok mat_tests-ex69_1+nsize-1test-2_l-0_use_shell-1
>>> ok diff-mat_tests-ex69_1+nsize-1test-2_l-0_use_shell-1
>>> ok mat_tests-ex69_1+nsize-1test-2_l-5_use_shell-0
>>> ok diff-mat_tests-ex69_1+nsize-1test-2_l-5_use_shell-0
>>> ok mat_tests-ex69_1+nsize-1test-2_l-5_use_shell-1
>>> ok diff-mat_tests-ex69_1+nsize-1test-2_l-5_use_shell-1
>>> ok mat_tests-ex69_1+nsize-2test-0_l-0_use_shell-0
>>> ok diff-mat_tests-ex69_1+nsize-2test-0_l-0_use_shell-0
>>> ok mat_tests-ex69_1+nsize-2test-0_l-0_use_shell-1
>>> ok diff-mat_tests-ex69_1+nsize-2test-0_l-0_use_shell-1
>>> ok mat_tests-ex69_1+nsize-2test-0_l-5_use_shell-0
>>> ok diff-mat_tests-ex69_1+nsize-2test-0_l-5_use_shell-0
>>> ok mat_tests-ex69_1+nsize-2test-0_l-5_use_shell-1
>>> ok diff-mat_tests-ex69_1+nsize-2test-0_l-5_use_shell-1
>>> ok mat_tests-ex69_1+nsize-2test-1_l-0_use_shell-0
>>> ok diff-mat_tests-ex69_1+nsize-2test-1_l-0_use_shell-0
>>> ok mat_tests-ex69_1+nsize-2test-1_l-0_use_shell-1
>>> ok diff-mat_tests-ex69_1+nsize-2test-1_l-0_use_shell-1
>>> ok mat_tests-ex69_1+nsize-2test-1_l-5_use_shell-0
>>> ok diff-mat_tests-ex69_1+nsize-2test-1_l-5_use_shell-0
>>> ok mat_tests-ex69_1+nsize-2test-1_l-5_use_shell-1
>>> ok diff-mat_tests-ex69_1+nsize-2test-1_l-5_use_shell-1
>>> ok mat_tests-ex69_1+nsize-2test-2_l-0_u

Re: [petsc-dev] MATOP_MAT_MULT

2020-05-10 Thread Jose E. Roman
Thanks for the hints. I have modified my branch. I was missing the 
MatShellSetVecType() call. Now everything works fine and all tests are clean.

Jose


> On 9 May 2020, at 21:32, Stefano Zampini wrote:
> 
> Jose
> 
> I have just pushed an updated example with the MatMat operation, and I do not 
> see the memory leak. Can you check?
> 
> zampins@jasmine:~/petsc$ make -f gmakefile.test test search='mat%' 
> searchin='ex69' PETSC_OPTIONS='-malloc -malloc_dump -malloc_debug' 
> /usr/bin/python /home/zampins/petsc/config/gmakegentest.py 
> --petsc-dir=/home/zampins/petsc --petsc-arch=arch-gpu-double-openmp-openblas 
> --testdir=./arch-gpu-double-openmp-openblas/tests
> Using MAKEFLAGS: -- PETSC_OPTIONS=-malloc -malloc_dump -malloc_debug 
> searchin=ex69 search=mat%
>   CC arch-gpu-double-openmp-openblas/tests/mat/tests/ex69.o
>  CLINKER arch-gpu-double-openmp-openblas/tests/mat/tests/ex69
> TEST 
> arch-gpu-double-openmp-openblas/tests/counts/mat_tests-ex69_1.counts
>  ok mat_tests-ex69_1+nsize-1test-0_l-0_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-1test-0_l-0_use_shell-0
>  ok mat_tests-ex69_1+nsize-1test-0_l-0_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-1test-0_l-0_use_shell-1
>  ok mat_tests-ex69_1+nsize-1test-0_l-5_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-1test-0_l-5_use_shell-0
>  ok mat_tests-ex69_1+nsize-1test-0_l-5_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-1test-0_l-5_use_shell-1
>  ok mat_tests-ex69_1+nsize-1test-1_l-0_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-1test-1_l-0_use_shell-0
>  ok mat_tests-ex69_1+nsize-1test-1_l-0_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-1test-1_l-0_use_shell-1
>  ok mat_tests-ex69_1+nsize-1test-1_l-5_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-1test-1_l-5_use_shell-0
>  ok mat_tests-ex69_1+nsize-1test-1_l-5_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-1test-1_l-5_use_shell-1
>  ok mat_tests-ex69_1+nsize-1test-2_l-0_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-1test-2_l-0_use_shell-0
>  ok mat_tests-ex69_1+nsize-1test-2_l-0_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-1test-2_l-0_use_shell-1
>  ok mat_tests-ex69_1+nsize-1test-2_l-5_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-1test-2_l-5_use_shell-0
>  ok mat_tests-ex69_1+nsize-1test-2_l-5_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-1test-2_l-5_use_shell-1
>  ok mat_tests-ex69_1+nsize-2test-0_l-0_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-2test-0_l-0_use_shell-0
>  ok mat_tests-ex69_1+nsize-2test-0_l-0_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-2test-0_l-0_use_shell-1
>  ok mat_tests-ex69_1+nsize-2test-0_l-5_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-2test-0_l-5_use_shell-0
>  ok mat_tests-ex69_1+nsize-2test-0_l-5_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-2test-0_l-5_use_shell-1
>  ok mat_tests-ex69_1+nsize-2test-1_l-0_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-2test-1_l-0_use_shell-0
>  ok mat_tests-ex69_1+nsize-2test-1_l-0_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-2test-1_l-0_use_shell-1
>  ok mat_tests-ex69_1+nsize-2test-1_l-5_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-2test-1_l-5_use_shell-0
>  ok mat_tests-ex69_1+nsize-2test-1_l-5_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-2test-1_l-5_use_shell-1
>  ok mat_tests-ex69_1+nsize-2test-2_l-0_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-2test-2_l-0_use_shell-0
>  ok mat_tests-ex69_1+nsize-2test-2_l-0_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-2test-2_l-0_use_shell-1
>  ok mat_tests-ex69_1+nsize-2test-2_l-5_use_shell-0
>  ok diff-mat_tests-ex69_1+nsize-2test-2_l-5_use_shell-0
>  ok mat_tests-ex69_1+nsize-2test-2_l-5_use_shell-1
>  ok diff-mat_tests-ex69_1+nsize-2test-2_l-5_use_shell-1
> 
> # -
> #   Summary
> # -
> # success 48/48 tests (100.0%)
> # failed 0/48 tests (0.0%)
> # todo 0/48 tests (0.0%)
> # skip 0/48 tests (0.0%)
> #
> # Wall clock time for tests: 58 sec
> # Approximate CPU time (not incl. build time): 62.11 sec
> #
> # Timing summary (actual test time / total CPU time): 
> #   mat_tests-ex69_1: 2.30 sec / 62.11 sec
> 
>> On May 9, 2020, at 9:28 PM, Jose E. Roman  wrote:
>> 
>> 
>> 
>>> El 9 may 2020, a las 20:00, Stefano Zampini  
>>> escribió:
>>> 
>>> 
>>> 
>>> On Sat, 9 May 2020 at 19:43, Jose E. Roman wrote:
>>> 
>>> 
>>>> On 9 May 2020, at 12:45, Stefano Zampini wrote:
>>>> 
>>>> Jose
>>>> 
>>>> I have just pushed a test 
>>>> https://gitlab.com/petsc/petsc/-/blob/d64c2bc63c8d5d1a8c689f1abc762ae2722bba26/src/mat/tests/ex69.c
>>>> See if it fits your framework, and f

Re: [petsc-dev] MATOP_MAT_MULT

2020-05-09 Thread Jose E. Roman


> On 9 May 2020, at 20:00, Stefano Zampini wrote:
> 
> 
> 
> On Sat, 9 May 2020 at 19:43, Jose E. Roman wrote:
> 
> 
> > On 9 May 2020, at 12:45, Stefano Zampini wrote:
> > 
> > Jose
> > 
> > I have just pushed a test 
> > https://gitlab.com/petsc/petsc/-/blob/d64c2bc63c8d5d1a8c689f1abc762ae2722bba26/src/mat/tests/ex69.c
> > See if it fits your framework, and feel free to modify the test to add more 
> > checks
> 
> Almost good. The following modification of the example fails with -test 1:
> 
> 
> diff --git a/src/mat/tests/ex69.c b/src/mat/tests/ex69.c
> index e562f1e2e3..2df2c89be1 100644
> --- a/src/mat/tests/ex69.c
> +++ b/src/mat/tests/ex69.c
> @@ -84,6 +84,10 @@ int main(int argc,char **argv)
>}
>ierr = VecCUDARestoreArray(v,);CHKERRQ(ierr);
> 
> +  if (test==1) {
> +ierr = MatDenseCUDAGetArray(B,);CHKERRQ(ierr);
> +if (aa) SETERRQ(PETSC_COMM_WORLD,PETSC_ERR_USER,"Expected a null 
> pointer");
> +  }
> 
>/* free work space */
>ierr = MatDestroy();CHKERRQ(ierr);
> 
> 
> 
> I would expect that after MatDenseCUDAResetArray() the pointer is NULL 
> because it was set so in line 60. In the CPU counterpart it works as expected.
> 
> Pushed a fix for this, thanks.
>  
> Another comment is: in line 60 you have changed MatDenseCUDAPlaceArray() to 
> MatDenseCUDAReplaceArray(). This is ok, but it is strange because 
> MatDenseReplaceArray() does not exist. So the interface is different in GPU 
> vs CPU, but I guess it is necessary here.
> 
> I think we do not support calling PlaceArray twice anywhere in PETSc. This is 
> why I have added MatDenseCUDAReplaceArray(). If you need support for the CPU 
> case too, I can add it.

Yes, please. It is better to have the same thing in both cases.

I am attaching the modified example, which now performs a mat-mat product. If I do 
A*B it works well, but if I replace A with a shell matrix I get a memory leak.

[ 0]32 bytes VecCUDAAllocateCheck() line 34 in 
/home/users/proy/copa/jroman/soft/petsc/src/vec/vec/impls/seq/seqcuda/veccuda2.cu
[ 0]32 bytes VecCUDAAllocateCheck() line 34 in 
/home/users/proy/copa/jroman/soft/petsc/src/vec/vec/impls/seq/seqcuda/veccuda2.cu



>  
> Thanks.
> Jose
> 
> 
> > 
> > 
> > On Fri, 8 May 2020 at 18:48, Jose E. Roman wrote:
> > Attached. Run with -test 1 or -test 2
> > 
> > > On 8 May 2020, at 17:14, Stefano Zampini wrote:
> > > 
> > > Jose
> > > 
> > > Just send me a MWE and I’ll fix the case for you
> > > 
> > > Thanks
> > > Stefano
> > 
> > 
> > -- 
> > Stefano
> 
> 
> 
> -- 
> Stefano


ex69.c
Description: Binary data
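The Place/Replace/Reset array semantics discussed above can be sketched on the CPU side as follows (the CUDA variants are analogous). This is illustrative only: MatDenseReplaceArray() was being added in this very thread, so the exact calls should be checked against that branch.

```c
#include <petscmat.h>

/* Sketch: temporarily wrap a user-provided buffer in a dense matrix B. */
static PetscErrorCode BorrowArray(Mat B, PetscScalar *user)
{
  PetscErrorCode ierr;
  ierr = MatDensePlaceArray(B, user);CHKERRQ(ierr); /* stash B's array, use 'user' */
  /* ... operate on B while it wraps 'user' ... */
  ierr = MatDenseResetArray(B);CHKERRQ(ierr);       /* restore B's own array */
  /* By contrast, MatDenseReplaceArray(B, other) hands 'other' to B
     permanently (no ResetArray afterwards), which is why it can be used
     where PlaceArray would otherwise have to be called twice. */
  return 0;
}
```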


Re: [petsc-dev] MATOP_MAT_MULT

2020-05-09 Thread Jose E. Roman



> On 9 May 2020, at 12:45, Stefano Zampini wrote:
> 
> Jose
> 
> I have just pushed a test 
> https://gitlab.com/petsc/petsc/-/blob/d64c2bc63c8d5d1a8c689f1abc762ae2722bba26/src/mat/tests/ex69.c
> See if it fits your framework, and feel free to modify the test to add more 
> checks

Almost good. The following modification of the example fails with -test 1:


diff --git a/src/mat/tests/ex69.c b/src/mat/tests/ex69.c
index e562f1e2e3..2df2c89be1 100644
--- a/src/mat/tests/ex69.c
+++ b/src/mat/tests/ex69.c
@@ -84,6 +84,10 @@ int main(int argc,char **argv)
   }
   ierr = VecCUDARestoreArray(v,);CHKERRQ(ierr);
 
+  if (test==1) {
+ierr = MatDenseCUDAGetArray(B,);CHKERRQ(ierr);
+if (aa) SETERRQ(PETSC_COMM_WORLD,PETSC_ERR_USER,"Expected a null pointer");
+  }
 
   /* free work space */
   ierr = MatDestroy();CHKERRQ(ierr);



I would expect that after MatDenseCUDAResetArray() the pointer is NULL because 
it was set so in line 60. In the CPU counterpart it works as expected.

Another comment is: in line 60 you have changed MatDenseCUDAPlaceArray() to 
MatDenseCUDAReplaceArray(). This is ok, but it is strange because 
MatDenseReplaceArray() does not exist. So the interface is different in GPU vs 
CPU, but I guess it is necessary here.

Thanks.
Jose


> 
> 
> On Fri, 8 May 2020 at 18:48, Jose E. Roman wrote:
> Attached. Run with -test 1 or -test 2
> 
> > On 8 May 2020, at 17:14, Stefano Zampini wrote:
> > 
> > Jose
> > 
> > Just send me a MWE and I’ll fix the case for you
> > 
> > Thanks
> > Stefano
> 
> 
> -- 
> Stefano



Re: [petsc-dev] MATOP_MAT_MULT

2020-05-08 Thread Jose E. Roman
Attached. Run with -test 1 or -test 2

> On 8 May 2020, at 17:14, Stefano Zampini wrote:
> 
> Jose
> 
> Just send me a MWE and I’ll fix the case for you
> 
> Thanks
> Stefano


ex1.c
Description: Binary data


Re: [petsc-dev] MATOP_MAT_MULT

2020-05-08 Thread Jose E. Roman
re’s issues (they can correct me if I’m wrong)
> >>> 
> >>> However, we should definitely have a way for the user to enquire if a 
> >>> given operation is supported or not. 
> >>> 
> >>> Thanks
> >>> Stefano
> >>> 
> >>>> On May 6, 2020, at 12:03 AM, Zhang, Hong  wrote:
> >>>> 
> >>>> Stefano:
> >>>> Now, we need address this bug report: enable 
> >>>> MatHasOperation(C,MATOP_MAT_MULT,) for matrix products, e.g., C=A*B, 
> >>>> which is related to your issue 
> >>>> https://gitlab.com/petsc/petsc/-/issues/608.
> >>>> 
> >>>> In petsc-3.13:
> >>>> 1) MATOP_MAT_MULT, ..., MATOP_MATMAT_MULT are removed from the MATOP 
> >>>> table (they are still listed in petscmat.h -- an overlook, I'll remove 
> >>>> them). 
> >>>> MATOP_MAT_MULT_SYMBOLIC/NUMERIC ... are still in the table.
> >>>> 2) MatHasOperation(C,...) must be called for the matrix product C, not 
> >>>> matrix A or B (slepc needs to fix this after this reported bug is fixed).
> >>>> 
> >>>> Like MatSetOption(), MatHasOperation() must be called AFTER 
> >>>> MatSetType(). You moved MatSetType() from MatProductSetFromOptions() 
> >>>> back to MatProductSymbolic() in your latest patch, thus user has to call 
> >>>> MatHasOption() after MatProductSymbolic():
> >>>> 
> >>>> MatProductCreate(A,B,NULL,);
> >>>> MatProductSetType(C,...);
> >>>> ...
> >>>> MatProductSetFromOptions();   //if the product is not supported for the 
> >>>> given mat types, currently petsc crashes here, which we can replace with 
> >>>> an error output
> >>>> 
> >>>> MatProductSymbolic(); -> call MatSetType()
> >>>> MatHasOperation(C,MATOP_MAT_MULT,)
> >>>> 
> >>>> Question: how to call MatHasOperation(C,..) when MatProductSymbolic() is 
> >>>> not supported?
> >>>> 
> >>>> My fix to this bug:
> >>>> Resume MatSetType() in MatProductSetFromOptions(). Then user calls:
> >>>> 
> >>>> MatProductCreate(A,B,NULL,);
> >>>> MatProductSetType(C,...);
> >>>> ...
> >>>> MatProductSetFromOptions(C);  //if the product is not supported for the 
> >>>> given mat types, C->ops->productsymbolic=NULL;
> >>>> MatHasOperation(C,MATOP_PRODUCTSYMBOLIC,);
> >>>> if (flg) { 
> >>>>   MatProductSymbolic(C);
> >>>>   ...
> >>>> } else {
> >>>>   MatDestroy();
> >>>>   ...
> >>>> }
> >>>> 
> >>>> Either you take care of this bug report, or let me know your thoughts 
> >>>> about how to fix this bug.
> >>>> Hong
> >>>> From: Zhang, Hong 
> >>>> Sent: Saturday, April 25, 2020 2:40 PM
> >>>> To: Pierre Jolivet 
> >>>> Cc: Jose E. Roman ; Stefano Zampini 
> >>>> ; petsc-dev ; Smith, 
> >>>> Barry F. 
> >>>> Subject: Re: [petsc-dev] MATOP_MAT_MULT
> >>>> 
> >>>> Pierre,
> >>>> When we do 
> >>>> MatProductCreate: C = A*B; //C owns A and B, thus B->refct =2
> >>>> MatProductCreateWithMats: B = A*C; //If I let B own A and C, then 
> >>>> C->refct=2
> >>>> Then
> >>>> MatDestroy() and MatDestroy() only reduce their refct from 2 to 1, 
> >>>> thus memory leak. 
> >>>> My solution is adding 
> >>>> {
> >>>>   matreference;  /* do not add refct when using 
> >>>> MatProductCreateWithMat() to avoid recursive references */
> >>>> } Mat_Product 
> >>>> This flg prevents MatProductCreateWithMats() to increase reference 
> >>>> counts, i.e., B does not own A and C to avoid reverse ownership. I am 
> >>>> not sure this is a reasonable solution. Let me know if you have better 
> >>>> solution.
> >>>> See ex109.c and ex195.c for tests.
> >>>> Hong
> >>>> From: Pierre Jolivet 
> >>>> Sent: Saturday, April 25, 2020 11:45 AM
> >>>> To: Zhang, Hong 
> >>>> Cc: Jose E. Roman ; Stefano Zampini 
> >>>> ; petsc-dev ; Smith, 
> >>>> Barry F. 

Re: [petsc-dev] MATOP_MAT_MULT

2020-05-06 Thread Jose E. Roman
I tried Stefano's branch with SLEPc, in combination with this branch:
https://gitlab.com/slepc/slepc/-/compare/master...jose%2Fbv-matmult-fallback

It is working as expected. I tried sequential and parallel examples. All tests 
are clean in both real and complex scalars. I did not try with CUDA yet, because 
it requires an additional change in SLEPc. I will have a look tomorrow.

Jose


> On 6 May 2020, at 20:00, Pierre Jolivet wrote:
> 
> Stefano,
> Is this working for nsize > 1 
> https://gitlab.com/petsc/petsc/-/blob/7e88e4dd44e2a5120b858cf9f19502ac359985be/src/mat/tests/ex70.c#L295
> I am now getting (in another example):
> [0]PETSC ERROR: Call MatProductSymbolic() first
> Instead of the previous:
> [0]PETSC ERROR: MatProductSetFromOptions_AB for A mpisbaij and B mpidense is 
> not supported
> 
> (But my branch is lagging behind maint, so maybe I’m missing some other 
> fixes, take this with a grain of salt).
> Thanks,
> Pierre
> 
>> On 6 May 2020, at 4:52 PM, Stefano Zampini  wrote:
>> 
>> I have working support for MATSHELL here 
>> https://gitlab.com/petsc/petsc/-/commit/146e7f1ccf5f267b36079cac494077a23e8bbc45
>> Tested here 
>> https://gitlab.com/petsc/petsc/-/commit/c4fcaa45a01cc783c629913983b204a1cbcb3939
>> 
>> Jose and Pierre, this code is supposed to work with CUDA, but I haven't 
>> tested it yet
>> Can you tell me if this fixes the issues for you to not have to loop over 
>> the columns of the dense matrix yourself?
>> 
>> On Wed, 6 May 2020 at 10:09, Stefano Zampini wrote:
>> Hong
>> 
>> If the product is not supported, the type of C will never be set anyway, so 
>> you cannot call MatHasOperation after MatProductSetFromOptions.
>> The purpose of MatProductSetFromOptions is to populate the function pointers 
>> for symbolic and numeric phases. If not found, they should be set to null 
>> instead of erroring as it is now.
>> What I propose is to have MatProductHasOperation (not MatHasOperation): this 
>> function will be identical to MatHasOperation, with the only difference that 
>> does not call PetscValidType on the input mat.
>> 
>> Meanwhile, I’m coding a basic MatMat (and MatTransposeMat) driver to loop 
>> over dense columns and apply MatMult (or MatMultTranspose) without memory 
>> movement.
>> This will be valid for all B matrices being of type dense (and its 
>> derivations), with C of type dense too. This in principle will fix Jose and 
>> Pierre’s issues (they can correct me if I’m wrong)
>> 
>> However, we should definitely have a way for the user to enquire if a given 
>> operation is supported or not. 
>> 
>> Thanks
>> Stefano
>> 
>>> On May 6, 2020, at 12:03 AM, Zhang, Hong  wrote:
>>> 
>>> Stefano:
>>> Now, we need address this bug report: enable 
>>> MatHasOperation(C,MATOP_MAT_MULT,) for matrix products, e.g., C=A*B, 
>>> which is related to your issue https://gitlab.com/petsc/petsc/-/issues/608.
>>> 
>>> In petsc-3.13:
>>> 1) MATOP_MAT_MULT, ..., MATOP_MATMAT_MULT are removed from the MATOP table 
>>> (they are still listed in petscmat.h -- an overlook, I'll remove them). 
>>> MATOP_MAT_MULT_SYMBOLIC/NUMERIC ... are still in the table.
>>> 2) MatHasOperation(C,...) must be called for the matrix product C, not 
>>> matrix A or B (slepc needs to fix this after this reported bug is fixed).
>>> 
>>> Like MatSetOption(), MatHasOperation() must be called AFTER MatSetType(). 
>>> You moved MatSetType() from MatProductSetFromOptions() back to 
>>> MatProductSymbolic() in your latest patch, thus user has to call 
>>> MatHasOption() after MatProductSymbolic():
>>> 
>>> MatProductCreate(A,B,NULL,);
>>> MatProductSetType(C,...);
>>> ...
>>> MatProductSetFromOptions();   //if the product is not supported for the 
>>> given mat types, currently petsc crashes here, which we can replace with an 
>>> error output
>>> 
>>> MatProductSymbolic(); -> call MatSetType()
>>> MatHasOperation(C,MATOP_MAT_MULT,)
>>> 
>>> Question: how to call MatHasOperation(C,..) when MatProductSymbolic() is not 
>>> supported?
>>> 
>>> My fix to this bug:
>>> Resume MatSetType() in MatProductSetFromOptions(). Then user calls:
>>> 
>>> MatProductCreate(A,B,NULL,);
>>> MatProductSetType(C,...);
>>> ...
>>> MatProductSetFromOptions(C);  //if the product is not supported for the 
>>> given mat types, C->ops->productsymbolic=NULL;
>>> MatH

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-23 Thread Jose E. Roman
I agree with Pierre. However, if the fix involves an API change then I could 
understand it going to master.


> On 23 April 2020, at 7:43, Pierre Jolivet wrote:
> 
> I don’t know if you really meant to ask for José's opinion here, but I 
> personally think that releasing all 3.13.X version with a broken MatMatMult 
> and no deprecation warning concerning MATOP_MAT_MULT is not the best.
> Thanks,
> Pierre
> 
>> On 23 Apr 2020, at 4:03 AM, Zhang, Hong  wrote:
>> 
>> Jose,
>> I'll check and fix them. I have to do it in master, is that ok?
>> Hong
>> 
>> From: Pierre Jolivet 
>> Sent: Wednesday, April 22, 2020 3:08 PM
>> To: Zhang, Hong 
>> Cc: Jose E. Roman ; Stefano Zampini 
>> ; petsc-dev ; Smith, Barry 
>> F. 
>> Subject: Re: [petsc-dev] MATOP_MAT_MULT
>>  
>> Hong,
>> I also now just tested some previously PETSC_VERSION_LT(3,13,0) running code 
>> with C=A*B, Dense=Nest*Dense, all previously allocated prior to a call to 
>> MatMatMult and scall = MAT_REUSE_MATRIX.
>> Sadly, it’s now broken. It is my fault for not having a test for this in 
>> https://gitlab.com/petsc/petsc/-/merge_requests/2069, sorry about that.
>> [0]PETSC ERROR: Call MatProductSymbolic() first
>> [0]PETSC ERROR: #1 MatProductNumeric() line 730 in 
>> /ccc/work/cont003/rndm/rndm/petsc/src/mat/interface/matproduct.c
>> [0]PETSC ERROR: #2 MatMatMult() line 9335 in 
>> /ccc/work/cont003/rndm/rndm/petsc/src/mat/interface/matrix.c
>> 
>> Here is a reproducer (that will work OK with 3.12.4).
>> diff --git a/src/mat/tests/ex195.c b/src/mat/tests/ex195.c
>> index c72662bc3c..811de669c5 100644
>> --- a/src/mat/tests/ex195.c
>> +++ b/src/mat/tests/ex195.c
>> @@ -73,2 +73,3 @@ int main(int argc,char **args)
>>ierr = MatMatMult(nest,B,MAT_REUSE_MATRIX,PETSC_DEFAULT,);CHKERRQ(ierr);
>> +  ierr = MatMatMult(nest,C,MAT_REUSE_MATRIX,PETSC_DEFAULT,);CHKERRQ(ierr);
>>ierr = MatMatMultEqual(nest,B,C,10,);CHKERRQ(ierr);
>> 
>> $ make -f gmakefile test searchin=mat_tests-ex195
>> 
>> I believe this is very close to the topic at hand and issue #608, so maybe 
>> you could fix this as well in the same upcoming MR? Just let me know, I can 
>> have a crack it otherwise.
>> Thanks,
>> Pierre
>> 
>>> On 22 Apr 2020, at 5:38 PM, Zhang, Hong  wrote:
>>> 
>>> Jose, Pierre and Stefano,
>>> Now I understand the issue that Stefano raised. I plan to add
>>> MatProductIsSupported(Wmat,,)
>>> the flag 'supported' tells if the product is supported/implemented or not,
>>> and the function pointer 'matproductsetfromoptions' gives the name of 
>>> MatProductSetFromOptions_xxx, (including basic implementation) or NULL.
>>> 
>>> Let me know your suggestions. I'll list all of you as reviewer.
>>> Hong
>>> 
>>>   
>>> From: Jose E. Roman 
>>> Sent: Wednesday, April 22, 2020 9:07 AM
>>> To: Stefano Zampini 
>>> Cc: Zhang, Hong ; Pierre Jolivet 
>>> ; petsc-dev 
>>> Subject: Re: [petsc-dev] MATOP_MAT_MULT
>>>  
>>> I agree with Pierre and Stefano.
>>> Hong: your proposed solution would be fine, but MATOP_MATPRODUCT does not 
>>> exist yet, so I cannot try it.
>>> I would like a solution along the lines of what Stefano suggests. It is not 
>>> too much trouble if it goes to master instead of maint.
>>> 
>>> Thanks.
>>> Jose
>>> 
>>> 
>>> > On 22 April 2020, at 15:26, Stefano Zampini wrote:
>>> > 
>>> > 
>>> >> 
>>> >> MatProductCreateWithMat(A,Vmat,NULL,Wmat);
>>> >> MatProductSetType(Wmat,MATPRODUCT_AB);
>>> >> MatHasOperation(Wmat,MATOP_MATPRODUCT,); //new support, it calls 
>>> >> MatProductSetFromOptions(Wmat)
>>> > 
>>> > Hong, this would go in the direction I was outlining here 
>>> > https://gitlab.com/petsc/petsc/-/issues/608
>>> > How about also adding something like
>>> > 
>>> > MatProductIsImplemented(Wmat,)
>>> > 
>>> > That returns true if a specific implementation is available?
>>> > 
>>> > This way, if we use both queries, we can assess the presence of the basic 
>>> > fallbacks too, i.e.
>>> >  
>>> > MatHasOperation(Wmat,MATOP_MATPRODUCT,)
>>> > MatProductIsImplemented(Wmat,)
>>> > 
>>> > If flg1 is false, no support at all
>>> > If flg1 is 

Re: [petsc-dev] MATOP_MAT_MULT

2020-04-22 Thread Jose E. Roman
I agree with Pierre and Stefano.
Hong: your proposed solution would be fine, but MATOP_MATPRODUCT does not exist 
yet, so I cannot try it.
I would like a solution along the lines of what Stefano suggests. It is not too 
much trouble if it goes to master instead of maint.

Thanks.
Jose
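For context, the MatProduct call sequence that replaces MatMatMult(), as listed further down in the quoted thread, can be sketched as follows. Error handling around unsupported type combinations is precisely what is being debated here, so this is a sketch against the master branch under discussion, not an authoritative reference.

```c
#include <petscmat.h>

/* Sketch of C = A*B with the MatProduct API that replaces MatMatMult(). */
static PetscErrorCode ProductAB(Mat A, Mat B, Mat *C)
{
  PetscErrorCode ierr;
  ierr = MatProductCreate(A, B, NULL, C);CHKERRQ(ierr);      /* C holds refs to A and B */
  ierr = MatProductSetType(*C, MATPRODUCT_AB);CHKERRQ(ierr); /* request C = A*B */
  ierr = MatProductSetFromOptions(*C);CHKERRQ(ierr);         /* select an implementation */
  ierr = MatProductSymbolic(*C);CHKERRQ(ierr);               /* allocate/structure C */
  ierr = MatProductNumeric(*C);CHKERRQ(ierr);                /* compute the entries */
  return 0;
}
```

The open question in the thread is what MatProductSetFromOptions() should do when no implementation exists for the given matrix types: error out, or leave a queryable NULL so callers can fall back to a column-by-column MatMult loop.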


> On 22 April 2020, at 15:26, Stefano Zampini wrote:
> 
> 
>> 
>> MatProductCreateWithMat(A,Vmat,NULL,Wmat);
>> MatProductSetType(Wmat,MATPRODUCT_AB);
>> MatHasOperation(Wmat,MATOP_MATPRODUCT,); //new support, it calls 
>> MatProductSetFromOptions(Wmat)
> 
> Hong, this would go in the direction I was outlining here 
> https://gitlab.com/petsc/petsc/-/issues/608
> How about also adding something like
> 
> MatProductIsImplemented(Wmat,)
> 
> That returns true if a specific implementation is available?
> 
> This way, if we use both queries, we can assess the presence of the basic 
> fallbacks too, i.e.
>  
> MatHasOperation(Wmat,MATOP_MATPRODUCT,)
> MatProductIsImplemented(Wmat,)
> 
> If flg1 is false, no support at all
> If flg1 is true and flg2 is false -> Basic implementation (i.e., MatShell with 
> products inside)
> If flg1 and flg2 are both true -> Specific implementation available.
> 
>> if (V->vmm && flg) {
>>   MatProductSymbolic(Wmat);
>>   MatProductNumeric(Wmat);
>> } else {
>>   MatDestroy(Wmat);
>>   ...
>> }
>> Hong
>> 
>> 
>> From: Jose E. Roman 
>> Sent: Tuesday, April 21, 2020 11:21 AM
>> To: Pierre Jolivet 
>> Cc: Zhang, Hong ; petsc-dev 
>> Subject: Re: [petsc-dev] MATOP_MAT_MULT
>>  
>> 
>> 
>> > On 21 April 2020, at 17:53, Pierre Jolivet wrote:
>> > 
>> > 
>> > 
>> >> On 21 Apr 2020, at 5:22 PM, Zhang, Hong  wrote:
>> >> 
>> >> Pierre,
>> >> MatMatMult_xxx() is removed from MatOps table.
>> > 
>> > Shouldn’t there be a deprecation notice somewhere?
>> > There is nothing about MATOP_MAT_MULT in the 3.13 changelog 
>> > https://www.mcs.anl.gov/petsc/documentation/changes/313.html
>> > For example, I see that in SLEPc, José is currently making these checks, 
>> > which are in practice useless as they always return 
>> > PETSC_FALSE? https://gitlab.com/slepc/slepc/-/blob/master/src/sys/classes/bv/impls/contiguous/contig.c#L191
>> > (Maybe José is aware of this and this is just for testing)
>> 
>> No, I was not aware of this. Thanks for bringing this up. Now in 3.13 we are 
>> always doing the slow version (column by column), so yes I am interested in 
>> a solution for this.
>> 
>> > 
>> >> MatMatMult() is replaced by
>> >> MatProductCreate()
>> >> MatProductSetType(,MATPRODUCT_AB)
>> >> MatProductSetFromOptions()
>> >> MatProductSymbolic()
>> >> MatProductNumeric()
>> >> 
>> >> Where/when do you need to query a single matrix for its product operation?
>> > 
>> > I didn’t want to bother at first with the new API, because I’m only 
>> > interested in C = A*B with C and B being dense.
>> > Of course, I can update my code, but if I understand Stefano’s issue 
>> > correctly, and let’s say my A is of type SBAIJ, for which there is no 
>> > MatMatMult, the code will now error out in the MatProduct?
>> > There is no fallback mechanism? Meaning I could in fact _not_ use the new 
>> > API and will just have to loop on all columns of B, even for AIJ matrices.
>> > 
>> > Thanks,
>> > Pierre
>> > 
>> >> Hong
>> >> 
>> >> From: petsc-dev  on behalf of Pierre 
>> >> Jolivet 
>> >> Sent: Tuesday, April 21, 2020 7:50 AM
>> >> To: petsc-dev 
>> >> Subject: [petsc-dev] MATOP_MAT_MULT
>> >>  
>> >> Hello,
>> >> Am I seeing this correctly?
>> >> #include <petscmat.h>
>> >> 
>> >> int main(int argc,char **args)
>> >> {
>> >>   Mat   A;
>> >>   PetscBool hasMatMult;
>> >>   PetscErrorCode ierr;
>> >> 
>> >>   ierr = PetscInitialize(&argc,&args,NULL,NULL);if (ierr) return ierr;
>> >>   ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
>> >>   ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
>> >>   ierr = MatHasOperation(A,MATOP_MAT_MULT,&hasMatMult);CHKERRQ(ierr);
>> >>   printf("%s\n", PetscBools[hasMatMult]);
>> >>   ierr = PetscFinalize();
>> >>   return ierr;
>> >> }
>> >> 
>> >> => FALSE
>> >> 
>> >> I believe this is a regression (or at least an undocumented change) 
>> >> introduced here: https://gitlab.com/petsc/petsc/-/merge_requests/2524/
>> >> I also believe Stefano raised a similar point there: 
>> >> https://gitlab.com/petsc/petsc/-/issues/608
>> >> This is a performance killer in my case because I was previously using 
>> >> this check to know whether I could use MatMatMult or had to loop on all 
>> >> columns and call MatMult on all of them.
>> >> There is also a bunch of (previously functioning but now) broken code, 
>> >> e.g., 
>> >> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/transpose/transm.c.html#line105
>> >>  or 
>> >> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/nest/matnest.c.html#line2105
>> >> Is this being addressed/documented?
>> >> 
>> >> Thanks,
>> >> Pierre
>> > 
> 
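[Editorial note for archive readers: the column-by-column fallback Pierre mentions above — apply A to each column of the dense B when no MatMatMult implementation exists for A's type — can be sketched as the fragment below. This is illustrative only, not code from the thread; the MatDenseGetColumnVecRead/Write helpers are assumed to be available (they were added to PETSc after this discussion), and error checking is abbreviated.]

```c
/* Sketch: C = A*B computed column by column when no MatMatMult
   implementation exists for A's type. B and C are dense. Assumes the
   MatDenseGetColumnVec* helpers from later PETSc versions. */
PetscInt j, N;
Vec bj, cj;
ierr = MatGetSize(B, NULL, &N);CHKERRQ(ierr);
for (j = 0; j < N; j++) {
  ierr = MatDenseGetColumnVecRead(B, j, &bj);CHKERRQ(ierr);
  ierr = MatDenseGetColumnVecWrite(C, j, &cj);CHKERRQ(ierr);
  ierr = MatMult(A, bj, cj);CHKERRQ(ierr);              /* c_j = A * b_j */
  ierr = MatDenseRestoreColumnVecWrite(C, j, &cj);CHKERRQ(ierr);
  ierr = MatDenseRestoreColumnVecRead(B, j, &bj);CHKERRQ(ierr);
}
```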



Re: [petsc-dev] MATOP_MAT_MULT

2020-04-21 Thread Jose E. Roman



> On 21 Apr 2020, at 17:53, Pierre Jolivet wrote:
> 
> 
> 
>> On 21 Apr 2020, at 5:22 PM, Zhang, Hong  wrote:
>> 
>> Pierre,
>> MatMatMult_xxx() is removed from MatOps table.
> 
> Shouldn’t there be a deprecation notice somewhere?
> There is nothing about MATOP_MAT_MULT in the 3.13 changelog 
> https://www.mcs.anl.gov/petsc/documentation/changes/313.html
> For example, I see that in SLEPc, José is currently making these checks, 
> which are in practice useless as they always return PETSC_FALSE? 
> https://gitlab.com/slepc/slepc/-/blob/master/src/sys/classes/bv/impls/contiguous/contig.c#L191
> (Maybe José is aware of this and this is just for testing)

No, I was not aware of this. Thanks for bringing this up. Now in 3.13 we are 
always doing the slow version (column by column), so yes I am interested in a 
solution for this.

> 
>> MatMatMult() is replaced by
>> MatProductCreate()
>> MatProductSetType(C,MATPRODUCT_AB)
>> MatProductSetFromOptions()
>> MatProductSymbolic()
>> MatProductNumeric()
>> 
>> Where/when do you need to query a single matrix for its product operation?
> 
> I didn’t want to bother at first with the new API, because I’m only 
> interested in C = A*B with C and B being dense.
> Of course, I can update my code, but if I understand Stefano’s issue 
> correctly, and let’s say my A is of type SBAIJ, for which there is no 
> MatMatMult, the code will now error out in the MatProduct?
> There is no fallback mechanism? Meaning I could in fact _not_ use the new API 
> and will just have to loop on all columns of B, even for AIJ matrices.
> 
> Thanks,
> Pierre
> 
>> Hong
>> 
>> From: petsc-dev  on behalf of Pierre Jolivet 
>> 
>> Sent: Tuesday, April 21, 2020 7:50 AM
>> To: petsc-dev 
>> Subject: [petsc-dev] MATOP_MAT_MULT
>>  
>> Hello,
>> Am I seeing this correctly?
>> #include <petscmat.h>
>> 
>> int main(int argc,char **args)
>> {
>>   Mat   A;
>>   PetscBool hasMatMult;
>>   PetscErrorCode ierr;
>> 
>>   ierr = PetscInitialize(&argc,&args,NULL,NULL);if (ierr) return ierr;
>>   ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
>>   ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
>>   ierr = MatHasOperation(A,MATOP_MAT_MULT,&hasMatMult);CHKERRQ(ierr);
>>   printf("%s\n", PetscBools[hasMatMult]);
>>   ierr = PetscFinalize();
>>   return ierr;
>> }
>> 
>> => FALSE
>> 
>> I believe this is a regression (or at least an undocumented change) 
>> introduced here: https://gitlab.com/petsc/petsc/-/merge_requests/2524/
>> I also believe Stefano raised a similar point there: 
>> https://gitlab.com/petsc/petsc/-/issues/608
>> This is a performance killer in my case because I was previously using this 
>> check to know whether I could use MatMatMult or had to loop on all columns 
>> and call MatMult on all of them.
>> There is also a bunch of (previously functioning but now) broken code, e.g., 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/transpose/transm.c.html#line105
>>  or 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/nest/matnest.c.html#line2105
>> Is this being addressed/documented?
>> 
>> Thanks,
>> Pierre
> 
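[Editorial note for archive readers: the replacement sequence Hong lists above, written out as a fragment. A sketch assuming the PETSc >= 3.13 MatProduct API, with A and B already assembled and error checking abbreviated; not code from the thread.]

```c
/* Sketch: C = A*B via the MatProduct API (PETSc >= 3.13). */
Mat C;
ierr = MatProductCreate(A, B, NULL, &C);CHKERRQ(ierr);
ierr = MatProductSetType(C, MATPRODUCT_AB);CHKERRQ(ierr);
ierr = MatProductSetFromOptions(C);CHKERRQ(ierr);
ierr = MatProductSymbolic(C);CHKERRQ(ierr);
ierr = MatProductNumeric(C);CHKERRQ(ierr);
/* After A or B change values (same sparsity pattern), only the
   numeric phase needs to be repeated: */
ierr = MatProductNumeric(C);CHKERRQ(ierr);
```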



Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-23 Thread Jose E. Roman via petsc-dev



> On 22 Sep 2019, at 19:11, Smith, Barry F. wrote:
> 
>   Jose,
> 
> Thanks for the pointer. 
> 
> Will this change dramatically affect the organization of SLEPc? As noted 
> in my previous email eventually we need to switch to a new API where the 
> REUSE with a different matrix is even more problematic.
> 
>  If you folks have use cases that fundamentally require reusing a 
> previous matrix instead of destroying and getting a new one created we will 
> need to think about additional features in the API that would allow this 
> reusing of an array. But it seems to me that destroying the old matrix and 
> using the initial call to create the matrix should be ok and just require 
> relatively minor changes to your codes?
> 
>  Barry

We use MatDensePlaceArray() to plug an array into matrix C before MatMatMult(). 
If we cannot do this, we will have to copy from the internal array of the 
result C to our array.

Would the following sequence work?
MatMatMultSymbolic()
MatDensePlaceArray()
MatMatMultNumeric()

Jose
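[Editorial note for archive readers: the sequence Jose asks about, written out as a fragment. Illustrative only; whether placing a user array between the symbolic and numeric phases is supported is precisely the open question in this thread.]

```c
/* Sketch of the proposed sequence: plug a user-owned array into the dense
   result C between the symbolic and numeric phases of MatMatMult. */
ierr = MatMatMultSymbolic(A, B, PETSC_DEFAULT, &C);CHKERRQ(ierr);
ierr = MatDensePlaceArray(C, user_array);CHKERRQ(ierr);  /* C now writes into user_array */
ierr = MatMatMultNumeric(A, B, C);CHKERRQ(ierr);
ierr = MatDenseResetArray(C);CHKERRQ(ierr);              /* restore C's own storage */
```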



Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Jose E. Roman via petsc-dev
The man page of MatMatMult says:
"In the special case where matrix B (and hence C) are dense you can create the 
correctly sized matrix C yourself and then call this routine with 
MAT_REUSE_MATRIX, rather than first having MatMatMult() create it for you."

If you are going to change the usage, don't forget to remove this sentence. 
This use case is what we use in SLEPc and is now causing trouble.
Jose



> On 22 Sep 2019, at 18:49, Pierre Jolivet via petsc-dev wrote:
> 
> 
>> On 22 Sep 2019, at 6:33 PM, Smith, Barry F.  wrote:
>> 
>> 
>>  Ok. So we definitely need better error checking and to clean up the code, 
>> comments and docs 
>> 
>>  As the approaches for these computations of products get more complicated 
>> it becomes a bit harder to support the use of a raw product matrix so I 
>> don't think we want to add all the code needed to call the symbolic part 
>> (after the fact) when the matrix is raw.
> 
> To the best of my knowledge, there is only a single method (not taking MR 
> 2069 into account) that uses a MPIDense B and for which these approaches are 
> necessary, so it’s not like there is a hundred of code paths to fix, but I 
> understand your point.
> 
>> Would that make things terribly difficult for you not being able to use a 
>> raw matrix?
> 
> Definitely not, but that would require some more memory + one copy after the 
> MatMatMult (depending on the size of your block Krylov space, that can be 
> quite large, and that defeats the purpose of MR 2032 of being more memory 
> efficient).
> (BTW, I now remember that I’ve been using this “feature” since our SC16 paper 
> on block Krylov methods)
> 
>>  I suspect that the dense case was just lucky that using a raw matrix 
>> worked. 
> 
> I don’t think so, this is clearly the intent of MatMatMultNumeric_MPIDense 
> (vs. MatMatMultNumeric_MPIAIJ_MPIDense).
> 
>>  The removal of the de facto support for REUSE on the raw matrix should be 
>> added to the changes document.
>> 
>>  Sorry for the difficulties. We have trouble testing all the combinations of 
>> possible usage, even a coverage tool would not have indicated a problems the 
>> lack of lda support. 
> 
> No problem!
> 
> Thank you,
> Pierre
> 
>> Hong,
>> 
>>Can you take a look at these things on Monday and maybe get a clean into 
>> a MR so it gets into the release?
>> 
>>  Thanks
>> 
>> 
>>  Barry
>> 
>> 
>> 
>> 
>> 
>>> On Sep 22, 2019, at 11:12 AM, Pierre Jolivet  
>>> wrote:
>>> 
>>> 
 On 22 Sep 2019, at 6:03 PM, Smith, Barry F.  wrote:
 
 
 
> On Sep 22, 2019, at 10:14 AM, Pierre Jolivet via petsc-dev 
>  wrote:
> 
> FWIW, I’ve fixed MatMatMult and MatTransposeMatMult here 
> https://gitlab.com/petsc/petsc/commit/93d7d1d6d29b0d66b5629a261178b832a925de80
>  (with MAT_INITIAL_MATRIX).
> I believe there is something not right in your MR (2032) with 
> MAT_REUSE_MATRIX (without having called MAT_INITIAL_MATRIX first), cf. 
> https://gitlab.com/petsc/petsc/merge_requests/2069#note_220269898.
> Of course, I’d love to be proved wrong!
 
 I don't understand the context.
 
  MAT_REUSE_MATRIX requires that the C matrix has come from a previous call 
 with MAT_INITIAL_MATRIX, you cannot just put any matrix in the C location.
>>> 
>>> 1) It was not the case before the MR, I’ve used that “feature” (which may 
>>> be specific for MatMatMult_MPIAIJ_MPIDense) for as long as I can remember
>>> 2) If it is not the case anymore, I think it should be mentioned somewhere 
>>> (and not only in the git log, because I don’t think all users will go 
>>> through that)
>>> 3) This comment should be removed from the code as well: 
>>> https://www.mcs.anl.gov/petsc/petsc-dev/src/mat/impls/aij/mpi/mpimatmatmult.c.html#line398
>>> 
 This is documented in the manual page. We should have better error 
 checking that this is the case so the code doesn't crash at memory access 
 but instead produces a very useful error message if the matrix was not 
 obtained with MAT_INITIAL_MATRIX. 
 
 Is this the issue or do I not understand?
>>> 
>>> This is exactly the issue.
>>> 
 Barry
 
 BTW: yes MAT_REUSE_MATRIX has different meanings for different matrix 
 operations in terms of where the matrix came from, this is suppose to be 
 all documented in each methods manual page but some may be missing or 
 incomplete, and error checking is probably not complete for all cases.  
 Perhaps the code should be changed to have multiple different names for 
 each reuse case for clarity to user?
>>> 
>>> Definitely, cf. above.
>>> 
>>> Thanks,
>>> Pierre
>>> 
> 
> Thanks,
> Pierre
> 
>> On 22 Sep 2019, at 5:04 PM, Zhang, Hong  wrote:
>> 
>> I'll check it tomorrow.
>> Hong
>> 
>> On Sun, Sep 22, 2019 at 1:04 AM Pierre Jolivet via petsc-dev 
>>  wrote:
>> Jed,
>> I’m not sure how easy it is to put more than a few lines of code on 
>> 

Re: [petsc-dev] MAT_HERMITIAN

2019-09-11 Thread Jose E. Roman via petsc-dev
Not sure if I understand you. Do you mean that a complex SBAIJ Mat with 
MAT_HERMITIAN flag can be assumed to have zero imaginary part? I don't think 
so. This matrix should have real diagonal entries, but off-diagonal entries 
should be allowed to have nonzero imaginary part. This is what is done in 
MatMult_SeqSBAIJ_1_Hermitian(), where off-diagonal entries are conjugated when 
used for the strict lower triangular part. So I guess the right fix is to 
implement MatMult_SeqSBAIJ_2_Hermitian(), MatMult_SeqSBAIJ_3_Hermitian() and so 
on with appropriate use of PetscConj().

Jose

> On 11 Sep 2019, at 10:36, Pierre Jolivet wrote:
> 
> Nevermind, this is the wrong fix.
> The proper fix is in PETSc. It should not error out if the matrix is also 
> symmetric.
> Indeed, complex symmetric Hermitian => complex with no imaginary part.
> Thus all operations like MatMult, MatMultHermitianTranspose, Cholesky… will 
> work for bs > 1, since all is filled with zeroes.
> I will take care of this, I’m c/c’ing petsc-dev so that they don’t have to 
> “reverse engineer” the trivial change to MatSetOption_SeqSBAIJ.
> 
> Sorry about the noise.
> 
> Thank you,
> Pierre
> 
>> On 10 Sep 2019, at 8:37 AM, Pierre Jolivet  
>> wrote:
>> 
>> Hello,
>> Could you consider not setting MAT_HERMITIAN here 
>> http://slepc.upv.es/documentation/current/src/sys/classes/st/interface/stsles.c.html#line276
>>  when using SBAIJ matrices with bs > 1?
>> This makes PETSc error out with
>> #[1]PETSC ERROR: No support for this operation for this object type
>> #[1]PETSC ERROR: No support for Hermitian with block size greater than 1
>> 
>> The change does not bring any regression, since PETSc is always giving an 
>> error without it, but on the contrary, it improves the range of 
>> applicability of SLEPc, e.g., for complex Hermitian problems with SBAIJ 
>> matrices and bs > 1 that _don’t_ require the flag MAT_HERMITIAN set to true.
>> 
>> Thanks,
>> Pierre
> 



Re: [petsc-dev] Configure issue, PETSC_USE_SOCKET_VIEWER not defined

2019-07-18 Thread Jose E. Roman via petsc-dev
My recent PR#1886 is also related to what Lisandro reports:
 https://bitbucket.org/petsc/petsc/pull-requests/1886/fix-compiler-warning/diff

The corresponding configure.log is here:
http://slepc.upv.es/buildbot/builders/athor-linux-icc-c-complex-int64-mkl/builds/534/steps/Configure%20PETSc/logs/configure.log

Jose



> On 18 Jul 2019, at 15:07, Smith, Barry F. via petsc-dev wrote:
> 
> 
>  Lisandro,
> 
>Thanks for letting us know. Could you please send configure.log for your 
> failed case. The code to detect and use the variable is still in the PETSc 
> source so I must have introduced something that makes it no longer function 
> correctly. As soon as I can after getting your configure.log I'll debug and 
> fix.
> 
>   Barry
> 
> 
>> On Jul 18, 2019, at 5:21 AM, Lisandro Dalcin  wrote:
>> 
>> PETSC_USE_SOCKET_VIEWER is no longer defined in petsconf.h when configuring 
>> on my Fedora 30. 
>> 
>> I think the problem started in the following commit, the parent of this one 
>> seems to be OK.
>> 
>> commit 2475b7ca256cea2a4b7cbf2d8babcda14e5fa36e
>> Author: Barry Smith 
>> Date:   Sun Jun 30 02:41:52 2019 -0500
>> 
>>Remove testing and inserting into petscconf.h items that are not actually 
>> used by PETSc
>> 
>>1) PETSC_HAVE_LIB - which was rarely used
>>   be careful with the package libpng and libjpeg since they have lib in 
>> the name of the package
>>2) various system include files that are never used or always exist: for 
>> example stdlib.h
>>3) various system functions that are never used or always exist
>>4) fixes for requires for MUMPS and SuperLU_DIST when dependent packages 
>> are installed or not installed (unrelated to the rest of this pull request)
>>5) packages that always exist such as PETSC_HAVE_BLASLAPACK, or are not 
>> used by PETSc such as PETSC_HAVE_NETCFD
>>6) remove a couple of uses of HAVE_LIB* in the code that were not needed 
>> by adjusting the configure code slightly
>>7) remove all the #if guards for each entry in petscconf.h since 
>> petscconf.h already has a guard and
>>   the values are never defined else where the extra guards just make the 
>> file cluttered
>> 
>>For a build with about 10 external packages this reduced the size of 
>> petscconf.h from 1236 lines to 828/4 around 220 entries.
>> 
>>Commit-type: style-fix, cleanup
>> 
>>Reported-by: Jed Brown
>> 
>> -- 
>> Lisandro Dalcin
>> 
>> Research Scientist
>> Extreme Computing Research Center (ECRC)
>> King Abdullah University of Science and Technology (KAUST)
>> http://ecrc.kaust.edu.sa/
> 



Re: [petsc-dev] I just removed Fortran support from my development build

2018-10-22 Thread Jose E. Roman
In my experience, ifort is much faster than gfortran building the Fortran 
bindings.
Jose


> On 22 Oct 2018, at 15:18, Matthew Knepley wrote:
> 
> Jenkins will now catch bad Fortran bindings. However, this is a larger 
> problem. GFortran takes forever to build the bindings. Is it similar for 
> other compilers?
> 
>   Matt
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/



Re: [petsc-dev] Remove legacy tests

2018-09-07 Thread Jose E. Roman
It is in master now, so the error should be fixed.
Jose


> On 7 Sep 2018, at 3:02, Satish Balay wrote:
> 
> I see the relvent slepc changes are in alex/test-harness
> 
> I've merged balay/remove-Regression.py branch into petsc master. So current 
> slepc master [with petsc master] gives:
> 
>>>>> 
> Checking environment... done
> Checking PETSc installation... done
> Checking ARPACK... done
> Checking LAPACK library... Traceback (most recent call last):
>  File "./configure", line 10, in <module>
>execfile(os.path.join(os.path.dirname(__file__), 'config', 'configure.py'))
>  File "./config/configure.py", line 321, in <module>
>testruns = set(petsc.test_runs.split())
> AttributeError: PETSc instance has no attribute 'test_runs'
> <<<<
> 
> This issue comes up with xsdk@develop - but I can work arround it.
> 
> Satish
> 
> On Wed, 5 Sep 2018, Jose E. Roman wrote:
> 
>> It works for us. Thanks.
>> Jose
>> 
>> 
>>> On 5 Sep 2018, at 15:15, Satish Balay wrote:
>>> 
>>> I pushed the change to balay/remove-Regression.py
>>> 
>>> Satish
>>> 
>>> On Mon, 3 Sep 2018, Jose E. Roman wrote:
>>> 
>>>> We are almost done with migrating SLEPc tests to the new test harness. If 
>>>> you want, you can remove Regression.py from PETSc, as well as any makefile 
>>>> rules that might remain for legacy tests.
>>>> 
>>>> Jose
>>>> 
>>> 
>> 



Re: [petsc-dev] Remove legacy tests

2018-09-05 Thread Jose E. Roman
It works for us. Thanks.
Jose


> On 5 Sep 2018, at 15:15, Satish Balay wrote:
> 
> I pushed the change to balay/remove-Regression.py
> 
> Satish
> 
> On Mon, 3 Sep 2018, Jose E. Roman wrote:
> 
>> We are almost done with migrating SLEPc tests to the new test harness. If 
>> you want, you can remove Regression.py from PETSc, as well as any makefile 
>> rules that might remain for legacy tests.
>> 
>> Jose
>> 
> 



[petsc-dev] Remove legacy tests

2018-09-03 Thread Jose E. Roman
We are almost done with migrating SLEPc tests to the new test harness. If you 
want, you can remove Regression.py from PETSc, as well as any makefile rules 
that might remain for legacy tests.

Jose



Re: [petsc-dev] Undefined symbols for _kspfgmresmodifypcksp_ and _kspfgmresmodifypcnochange_ when rebuilding

2018-08-22 Thread Jose E. Roman



> On 22 Aug 2018, at 12:52, Matthew Knepley wrote:
> 
> On Wed, Aug 22, 2018 at 6:35 AM Lawrence Mitchell  wrote:
> 
> > On 22 Aug 2018, at 10:04, Patrick Sanan  wrote:
> > 
> > This happens fairly frequently when I try to switch/update branches of 
> > PETSc (here invoked by building my own code, but the error message looks 
> > the same with "make check"):
> > 
> > $ make
> > /Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/bin/mpicc 
> > -o runme.o -c -Wall -Wwrite-strings -Wno-strict-aliasing 
> > -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3   
> > -I/Users/patrick/petsc-stagbl/include 
> > -I/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/include 
> > -I/opt/X11/include`pwd`/runme.c
> > /Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/bin/mpicc 
> > -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress 
> > -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind 
> > -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress 
> > -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind
> > -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas 
> > -fstack-protector -fvisibility=hidden -g3  -o runme runme.o 
> > -Wl,-rpath,/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/lib
> >  -L/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/lib 
> > -Wl,-rpath,/Users/patrick/petsc-stagbl/arch-darwin-stagbl-double-extra-debug/lib
> >  -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib 
> > -Wl,-rpath,/opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0 
> > -L/opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0 
> > -Wl,-rpath,/opt/local/lib/gcc7 -L/opt/local/lib/gcc7 -lpetsc -lcmumps 
> > -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack 
> > -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig 
> > -lsuperlu_dist -lHYPRE -lsundials_cvode -lsundials_nvecserial 
> > -lsundials_nvecparallel -llapack -lblas -lparmetis -lmetis -lX11 -lyaml 
> > -lstdc++ -ldl -lmpifort -lmpi -lpmpi -lgfortran -lquadmath -lm -lstdc++ -ldl
> > Undefined symbols for architecture x86_64:
> >   "_kspfgmresmodifypcksp_", referenced from:
> >   import-atom in libpetsc.dylib
> >   "_kspfgmresmodifypcnochange_", referenced from:
> >   import-atom in libpetsc.dylib
> > ld: symbol(s) not found for architecture x86_64
> > collect2: error: ld returned 1 exit status
> > 
> > I don't know why this is, exactly. Maybe it's more obvious from the 
> > perspective of someone more expert on the Fortran interface, and we could 
> > save some time reconfiguring (if these two symbols are really the only 
> > issue).
> > 
> >  For these two symbols, the corresponding functions are declared but not 
> > defined in
> > 
> > src/ksp/ksp/impls/gmres/fgmres/ftn-custom/zmodpcff.c
> > 
> > "make deletefortranstubs" by itself doesn't seem to solve the problem. My 
> > sledgehammer workaround is to do everything short of blowing away my entire 
> > $PETSC_ARCH directory:
> > 
> > make deletefortranstubs && make allclean && make reconfigure && make && 
> > make check
> 
> 
> Does it work to do:
> 
> make allfortranstubs && make
> 
> In these cases?
> 
> Lawrence is correct. Here is what is happening.
> 
> Someone changes an interface, and you pull the change. The header changes 
> will cause all the C files
> using that API to rebuild. However, the doc system (sowing) runs bfort on the 
> C file to generate the Fortran
> binding. It runs on all headers at once, so there is no separate rule for 
> bforting a C file when it changes.
> Things are now even worse, since we have Python code separate from bfort 
> which creates the Fortran
> modules, which also will not fire on updates to the C file.
> 
> The simplest fix is that you know that every time you see this problem, you 
> rerun 'make allfortranstubs'.
> The complicated fix is to rewrite bfort and the module generation into one 
> program which respects the
> dependency information. Since there is literally no credit associated with 
> this job, it is unlikely ever to happen.
> We await the passing of the last Fortran programmer.

Another fix is to have a custom Fortran stub for KSPFGMRESModifyPCKSP() and 
KSPFGMRESModifyPCNoChange(), rather than an automatic Fortran stub. That is, 
change /*@ to /*@C and add a definition for these functions in zmodpcff.c

Jose
 

> 
>Matt
>  
> I used to have to do this, until eventually I gave up and built without the 
> fortran interfaces (may not be an option).
> 
> I tried to unpick the make rules so that if you built with fortran 
> interfaces, the generation of individual interface would depend on the 
> relevant C files, but gave up, because I couldn't see what was going on.
> 
> Cheers,
> 
> Lawrence
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which 

Re: [petsc-dev] Remove legacy tests?

2018-07-06 Thread Jose E. Roman
Well, if you want to remove it, I can just insert 
${PETSC_DIR}/lib/petsc/conf/test in SLEPc's repository, so not a big problem.


> On 6 Jul 2018, at 7:46, Jose E. Roman wrote:
> 
> SLEPc still uses the legacy test system. I have not had time to move to the 
> new test harness.
> Jose
> 
> 
>> On 6 Jul 2018, at 2:42, Smith, Barry F. wrote:
>> 
>> 
>> 
>>> On Jul 5, 2018, at 5:36 PM, Jed Brown  wrote:
>>> 
>>> When can we delete the legacy test system?  Are we currently using it
>>> anywhere?
>> 
>> Make test currently requires the test include file
>> 
>>  Barry
>> 
>> 
> 



Re: [petsc-dev] Remove legacy tests?

2018-07-05 Thread Jose E. Roman
SLEPc still uses the legacy test system. I have not had time to move to the new 
test harness.
Jose


> On 6 Jul 2018, at 2:42, Smith, Barry F. wrote:
> 
> 
> 
>> On Jul 5, 2018, at 5:36 PM, Jed Brown  wrote:
>> 
>> When can we delete the legacy test system?  Are we currently using it
>> anywhere?
> 
>  Make test currently requires the test include file
> 
>   Barry
> 
> 



Re: [petsc-dev] Missing typedef ?

2018-01-28 Thread Jose E. Roman
MatSolverPackage has been renamed to MatSolverType in a recent commit in master.
Jose

> On 28 Jan 2018, at 17:50, Franck Houssen wrote:
> 
> Hello,
> 
> In petscmat.h, the line "#define MatSolverPackage char*" shouldn't it be 
> replaced with "typedef char* MatSolverPackage" ? (like it's done for MatType 
> and others)
> 
> When trying to use MatSolverPackage from a cpp file where "petsc.h" and 
> "petscmat.h" have been included I get this error : error: ‘MatSolverPackage’ 
> does not name a type; did you mean ‘MatSolverType’?.
> Looks like this error does occur, or not, depending on compiler / OS : this 
> is OK with debian but not with ubuntu-trusty (line 2191 here 
> https://travis-ci.org/fghoussen/geneo4PETSc/jobs/334384582).
> I would say the compile error can occur, or not, according to what 
> -fvisibility defaults to.
> 
> Not sure of this...
> 
> Franck



Re: [petsc-dev] new test harness in PETSc

2018-01-25 Thread Jose E. Roman
Are you going to keep the old makefiles? (I mean 
${PETSC_DIR}/lib/petsc/conf/test) In SLEPc we still use makefiles for the 
tests. I should move to the new system, but don't have time at the moment.

Jose


> On 25 Jan 2018, at 5:47, Smith, Barry F. wrote:
> 
> 
>   PETSc developers,
> 
> We have completed moving all PETSc examples over from the old test system 
> (where tests were written in the makefile) to a new system, provided by Scott 
> Kruger, where the test rules are written in bottom of the source file of the 
> example. Directions for usage and adding new tests can be found in the PETSc 
> developers manual http://www.mcs.anl.gov/petsc/petsc-dev/docs/developers.pdf 
> chapter 7.
> 
>  Barry
> 
> 



Re: [petsc-dev] [SPAM *****] Re: Issue with Lapack names

2017-12-19 Thread Jose E. Roman


> On 18 Dec 2017, at 22:34, Karl Rupp wrote:
> 
> 
> 
>> > > This is related to a message I sent 2 years ago to petsc-maint
>>"Inconsistent naming of one Lapack subroutine", where I advocated
>>renaming LAPACKungqr_ --> LAPACKorgqr_. But that thread did not end
>>up in any modification...
>> > >
>> > > I can't find the thread. I also do not understand the problem.
>>Are you saying that the check succeeds but the routines is still
>>missing?
>> >
>> > No, the opposite. The routines are there, but since configure
>>decided (wrongly) that they are missing, the check would fail at run
>>time complaining that the routines are missing.
>> >
>> > Ah. Why does the check fail? It does succeed for a number of them.
>>I don't know the exact reason, but it has to do with the names of
>>real/complex subroutines. I guess the test is checking for dungqr,
>>which does not exist - it should check for either dorgqr or zungqr.
>>Before that commit, there were only checks for "real" names, but
>>after the commit there are a mix of real and complex subroutines.
>> Now I really want to punch one of the LAPACK guys in the face. Which one...
>> Karl, I think it is enough right now to change the complex names, like ungqr 
>> to orgqr as Jose suggests. Will this work for you?
> 
> works for me, yes.
> If possible, I'd like to preserve the auto-generated nature of this list. If 
> 'dungqr' is the only exception, then please adjust the list of tests 
> accordingly *and* add a comment to BlasLapack.py saying why 'dungqr' is 
> special.
> 
> Best regards,
> Karli
> 

I have created a pull request for this.
https://bitbucket.org/petsc/petsc/pull-requests/826/fix-test-for-missing-lapack-subroutines/diff
Jose


> 
> 
>> >
>> >   Thanks,
>> >
>> > Matt
>> >
>> > Jose
>> >
>> > >
>> > >   Thanks,
>> > >
>> > >  Matt
>> > >
>> > >
>> > > Jose
>> > > --
>> > > What most experimenters take for granted before they begin
>>their experiments is infinitely more interesting than any results to
>>which their experiments lead.
>> > > -- Norbert Wiener
>> > >
>> > > https://www.cse.buffalo.edu/~knepley/
>>
>> >
>> >
>> >
>>>
>>> --
>>> What most experimenters take for granted before they begin their 
>> experiments is infinitely more interesting than any results to which their 
>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>
>> -- 
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> https://www.cse.buffalo.edu/~knepley/ 



Re: [petsc-dev] Issue with Lapack names

2017-12-18 Thread Jose E. Roman


> On 18 Dec 2017, at 18:58, Matthew Knepley <knep...@gmail.com> wrote:
> 
> On Mon, Dec 18, 2017 at 12:30 PM, Jose E. Roman <jro...@dsic.upv.es> wrote:
> I find the following definitions in petscconf.h, which are wrong because the 
> corresponding subroutines are present.
> 
> #define PETSC_MISSING_LAPACK_UNGQR 1
> #define PETSC_MISSING_LAPACK_HETRS 1
> #define PETSC_MISSING_LAPACK_HETRF 1
> #define PETSC_MISSING_LAPACK_HETRI 1
> 
> This did not happen in 3.8, it is due to this change:
> https://bitbucket.org/petsc/petsc/commits/b8695a4a8c7
> 
> So now one cannot use PETSC_MISSING_LAPACK_UNGQR to protect a code that calls 
> LAPACKungqr_
> 
> This is related to a message I sent 2 years ago to petsc-maint "Inconsistent 
> naming of one Lapack subroutine", where I advocated renaming LAPACKungqr_ --> 
> LAPACKorgqr_. But that thread did not end up in any modification...
> 
> I can't find the thread. I also do not understand the problem. Are you saying 
> that the check succeeds but the routines is still missing?

No, the opposite. The routines are there, but since configure decided (wrongly) 
that they are missing, the check would fail at run time complaining that the 
routines are missing.

Jose

> 
>   Thanks,
> 
>  Matt
>  
> 
> Jose
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/



[petsc-dev] Issue with Lapack names

2017-12-18 Thread Jose E. Roman
I find the following definitions in petscconf.h, which are wrong because the 
corresponding subroutines are present.

#define PETSC_MISSING_LAPACK_UNGQR 1
#define PETSC_MISSING_LAPACK_HETRS 1
#define PETSC_MISSING_LAPACK_HETRF 1
#define PETSC_MISSING_LAPACK_HETRI 1

This did not happen in 3.8, it is due to this change:
https://bitbucket.org/petsc/petsc/commits/b8695a4a8c7

So now one cannot use PETSC_MISSING_LAPACK_UNGQR to protect a code that calls 
LAPACKungqr_

This is related to a message I sent 2 years ago to petsc-maint "Inconsistent 
naming of one Lapack subroutine", where I advocated renaming LAPACKungqr_ --> 
LAPACKorgqr_. But that thread did not end up in any modification...

Jose



Re: [petsc-dev] SLEPc failure

2017-11-02 Thread Jose E. Roman
Could you please try the following modified function? It should replace the one 
in $SLEPC_DIR/include/slepc/private/bvimpl.h
Thanks.

PETSC_STATIC_INLINE PetscErrorCode BV_SafeSqrt(BV bv,PetscScalar 
alpha,PetscReal *res)
{
  PetscErrorCode ierr;
  PetscReal  absal,realp;

  PetscFunctionBegin;
  absal = PetscAbsScalar(alpha);
  realp = PetscRealPart(alpha);
  if (absal<PETSC_MACHINE_EPSILON) {
ierr = PetscInfo(bv,"Zero norm, either the vector is zero or a semi-inner 
product is being used\n");CHKERRQ(ierr);
  }
#if defined(PETSC_USE_COMPLEX)
  if (PetscAbsReal(PetscImaginaryPart(alpha))>PETSC_MACHINE_EPSILON && 
PetscAbsReal(PetscImaginaryPart(alpha))/absal>100*PETSC_MACHINE_EPSILON) 
SETERRQ1(PetscObjectComm((PetscObject)bv),1,"The inner product is not well 
defined: nonzero imaginary part %g",PetscImaginaryPart(alpha));
#endif
  if (bv->indef) {
*res = (realp<0.0)? -PetscSqrtReal(-realp): PetscSqrtReal(realp);
  } else {
if (realp<-10*PETSC_MACHINE_EPSILON) 
SETERRQ(PetscObjectComm((PetscObject)bv),1,"The inner product is not well 
defined: indefinite matrix");
*res = (realp<0.0)? 0.0: PetscSqrtReal(realp);
  }
  PetscFunctionReturn(0);
}



> On 31 Oct 2017, at 16:17, Franck Houssen <franck.hous...@inria.fr> wrote:
> 
> Thanks, this is helpful ! At least, I have some clues to dig deeper into.
> 
> The data set I have seems to be very sensitive: I knew B would likely be 
> close to singular (even arpack, which seems to be the most stable, is not 
> always robust enough depending on use cases and/or number of domains). To get 
> things stable with arpack, I had to add "-mat_mumps_cntl_1 0.01 
> -mat_mumps_cntl_3 -0.0001 -mat_mumps_cntl_4 0.0001", so I kept all of that 
> when testing with krylovschur. I have just tried removing these extra 
> options: I get the MUMPS error in numerical factorization. At least, it makes 
> sense...
> 
> Now, I added -eps_gen_non_hermitian and it failed with EPS_DIVERGED_ITS. 
> Increasing -eps_max_it does not help. So, I guess this is the end of the road.
> 
> Would you consider exposing (in a future release) the tolerance of this check 
> ? Or is this something you really want to keep private ? (whether or not B is 
> singular - I guess in my case this would not have helped anyway)
> 
> Franck
> 
> - Mail original -
>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>> À: "Franck Houssen" <franck.hous...@inria.fr>
>> Cc: "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
>> Envoyé: Lundi 30 Octobre 2017 18:11:48
>> Objet: Re: [petsc-dev] SLEPc failure
>> 
>> I am getting a MUMPS error in numerical factorization...
>> Anyway, your B matrix is singular, with a high-dimensional nullspace. Maybe
>> this is producing small negative values when computing v'*B*v.
>> There is no way to relax the check. You should solve the problem as
>> non-symmetric. Or use Arpack if it works for you.
>> 
>> Jose
>> 
>> 
>> 
>>> El 30 oct 2017, a las 17:16, Franck Houssen <franck.hous...@inria.fr>
>>> escribió:
>>> 
>>> I deal with domain decomposition. It was faster/easier to generate files
>>> with 1 proc: I guess the "dirichlet" and "neumann" matrices are the same
>>> in this case, so one gets the same files in the end... I didn't realize
>>> that when I sent the files. My mistake.
>>> 
>>> At my side, when I use krylovschur, SLEPc fails using 1, 2, 4, 8, ... procs
>>> (each proc performs a SLEPc solve that may fail or not - difficult to
>>> catch one). For instance, I attached data of one failed domain out of 8 (8
>>> MPI procs): matrices are very close but different. Moreover, I added EPS
>>> logs of the same run and domain but replacing krylovschur with arpack when
>>> SLEPc does not fail (regarding your remark that B could be indefinite, I
>>> added -mat_mumps_icntl_33 1 to get the determinant).
>>> 
>>> Anyway, I don't expect you spend too much time on this. My understanding is
>>> that there is no way to relax this check ? Correct ?
>>> 
>>> Franck
>>> 
>>> - Mail original -
>>>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>>>> À: "Franck Houssen" <franck.hous...@inria.fr>
>>>> Cc: "For users of the development version of PETSc"
>>>> <petsc-dev@mcs.anl.gov>
>>>> Envoyé: Samedi 28 Octobre 2017 17:40:41
>>>> Objet: Re: [petsc-dev] SLEPc failure
>>>> 
>>>> The two matrices are the same ...

Re: [petsc-dev] SLEPc failure

2017-10-30 Thread Jose E. Roman
I am getting a MUMPS error in numerical factorization...
Anyway, your B matrix is singular, with a high-dimensional nullspace. Maybe 
this is producing small negative values when computing v'*B*v.
There is no way to relax the check. You should solve the problem as 
non-symmetric. Or use Arpack if it works for you.

Jose



> El 30 oct 2017, a las 17:16, Franck Houssen <franck.hous...@inria.fr> 
> escribió:
> 
> I deal with domain decomposition. It was faster/easier to generate files with 
> 1 proc: I guess the "dirichlet" and "neumann" matrices are the same in this 
> case, so one gets the same files in the end... I didn't realize that when I 
> sent the files. My mistake.
> 
> At my side, when I use krylovschur, SLEPc fails using 1, 2, 4, 8, ... procs 
> (each proc performs a SLEPc solve that may fail or not - difficult to catch 
> one). For instance, I attached data of one failed domain out of 8 (8 MPI 
> procs): matrices are very close but different. Moreover, I added EPS logs of 
> the same run and domain but replacing krylovschur with arpack when SLEPc does 
> not fail (regarding your remark that B could be indefinite, I added 
> -mat_mumps_icntl_33 1 to get the determinant). 
> 
> Anyway, I don't expect you spend too much time on this. My understanding is 
> that there is no way to relax this check ? Correct ?
> 
> Franck
> 
> - Mail original -
>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>> À: "Franck Houssen" <franck.hous...@inria.fr>
>> Cc: "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
>> Envoyé: Samedi 28 Octobre 2017 17:40:41
>> Objet: Re: [petsc-dev] SLEPc failure
>> 
>> The two matrices are the same ...
>> 
>>> El 28 oct 2017, a las 13:11, Franck Houssen <franck.hous...@inria.fr>
>>> escribió:
>>> 
>>> I just added that before EPSSetOperators:
>>> PetscViewer viewerA;
>>> PetscViewerBinaryOpen(PETSC_COMM_WORLD,"Atau.out",FILE_MODE_WRITE,&viewerA);
>>> MatView(A,viewerA);
>>> PetscViewer viewerB;
>>> PetscViewerBinaryOpen(PETSC_COMM_WORLD,"Btau.out",FILE_MODE_WRITE,&viewerB);
>>> MatView(B,viewerB);
>>> 
>>> At first, I avoided binary as I didn't know if the format handles
>>> big/little endianness... So I prefered ASCII.
>>> 
>>> Binary data are attached.
>>> 
>>> Franck
>>> 
>>> PS : running debian with little endian.
>>>>> python -c "import sys;print(0 if sys.byteorder=='big' else 1)"
>>> 1
>>> 
>>> 
>>> - Mail original -
>>>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>>>> À: "Franck Houssen" <franck.hous...@inria.fr>
>>>> Cc: "For users of the development version of PETSc"
>>>> <petsc-dev@mcs.anl.gov>
>>>> Envoyé: Vendredi 27 Octobre 2017 18:52:56
>>>> Objet: Re: [petsc-dev] SLEPc failure
>>>> 
>>>> I cannot load the files you sent. Please send the matrices in binary
>>>> format.
>>>> The easiest way is to run your program with -eps_view_mat0 binary:Atau.bin
>>>> -eps_view_mat1 binary:Btau.bin
>>>> 
>>>> However, the files are written at the end of EPSSolve() so if the solve
>>>> fails
>>>> then it will not create the files. You can try running with -eps_max_it 1
>>>> or add code in your main program to write the matrices.
>>>> 
>>>> Jose
>>>> 
>>>> 
>>>>> El 27 oct 2017, a las 12:28, Franck Houssen <franck.hous...@inria.fr>
>>>>> escribió:
>>>>> 
>>>>> Maybe could be convenient for the users to have an option (or an
>>>>> EPSSetXXX)
>>>>> to relax that check ?
>>>>> Data are attached.
>>>>> 
>>>>> Franck
>>>>> 
>>>>> - Mail original -
>>>>>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>>>>>> À: "Franck Houssen" <franck.hous...@inria.fr>
>>>>>> Cc: "For users of the development version of PETSc"
>>>>>> <petsc-dev@mcs.anl.gov>
>>>>>> Envoyé: Vendredi 27 Octobre 2017 10:15:44
>>>>>> Objet: Re: [petsc-dev] SLEPc failure
>>>>>> 
>>>>>> There is no new option. What I mean is that from 3.7 to 3.8 we changed
>>>>>> the
>>>>>> line that produces this error. 

Re: [petsc-dev] QR factorization of dense matrix

2017-10-30 Thread Jose E. Roman
Any BV type will do. The default BVSVEC is generally best.
Jose


> El 30 oct 2017, a las 17:18, Franck Houssen <franck.hous...@inria.fr> 
> escribió:
> 
> It was not clear to me when I read the doc. That's OK now: got it to work, 
> thanks Jose !
> Just to make sure, to make it work, I had to set a BV type: I chose BVMAT as 
> I use BVCreateFromMat. Is that the right type ? (BVVECS works too)
> 
> Franck
> 
> ----- Mail original -
>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>> À: "Franck Houssen" <franck.hous...@inria.fr>
>> Cc: "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
>> Envoyé: Samedi 28 Octobre 2017 16:56:22
>> Objet: Re: [petsc-dev] QR factorization of dense matrix
>> 
>> Matrix R must be mxm.
>> BVOrthogonalize computes Z=Q*R, where Q overwrites Z.
>> Jose
>> 
>>> El 28 oct 2017, a las 13:11, Franck Houssen <franck.hous...@inria.fr>
>>> escribió:
>>> 
>>> I've seen that !... But can't get BVOrthogonalize to work.
>>> 
>>> I tried:
>>> Mat Z; MatCreateSeqDense(PETSC_COMM_SELF, n, m, NULL, &Z);
>>> ...; // MatSetValues(Z, ...)
>>> BVCreate(PETSC_COMM_SELF, &bv);
>>> BVCreateFromMat(Z, &bv); // Z is tall-skinny
>>> Mat R; MatCreateSeqDense(PETSC_COMM_SELF, n, m, NULL, &R); // Same n, m
>>> than Z.
>>> BVOrthogonalize(bv, R);
>>> 
>>> But BVOrthogonalize fails with :
>>>> [0]PETSC ERROR: Nonconforming object sizes
>>>> [0]PETSC ERROR: Mat argument is not square, it has 1 rows and 3 columns
>>> 
>>> So, as I didn't get what's wrong, I was looking for another way to do this.
>>> 
>>> Franck
>>> 
>>> - Mail original -
>>>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>>>> À: "Franck Houssen" <franck.hous...@inria.fr>
>>>> Cc: "For users of the development version of PETSc"
>>>> <petsc-dev@mcs.anl.gov>
>>>> Envoyé: Vendredi 27 Octobre 2017 19:03:37
>>>> Objet: Re: [petsc-dev] QR factorization of dense matrix
>>>> 
>>>> Franck,
>>>> 
>>>> SLEPc has some support for this, but it is intended only for tall-skinny
>>>> matrices, that is, when the number of columns is much smaller than rows.
>>>> For
>>>> an almost square matrix you should not use it.
>>>> 
>>>> Have a look at this
>>>> http://slepc.upv.es/documentation/current/docs/manualpages/BV/BVOrthogonalize.html
>>>> http://slepc.upv.es/documentation/current/docs/manualpages/BV/BVOrthogBlockType.html
>>>> 
>>>> You can see there are three methods. All of them have drawbacks:
>>>> GS: This is a Gram-Schmidt QR, computed column by column, so it is slower
>>>> than the other two. However, it is robust.
>>>> CHOL: Cholesky QR, it is not numerically stable. In the future we will add
>>>> Cholesky QR2.
>>>> TSQR: Unfortunately this is not implemented in parallel. I wanted to add
>>>> the
>>>> parallel version for 3.8, but didn't have time. It will be added soon.
>>>> 
>>>> You can use BVCreateFromMat() to create a BV object from a Mat.
>>>> 
>>>> Jose
>>>> 
>>>> 
>>>>> El 27 oct 2017, a las 18:39, Franck Houssen <franck.hous...@inria.fr>
>>>>> escribió:
>>>>> 
>>>>> I am looking for QR factorization of (sequential) dense matrix: is this
>>>>> available in PETSc ? I "just" need the diagonal of R (I do not need
>>>>> neither the full content of R, nor Q)
>>>>> 
>>>>> I found that (old !) thread
>>>>> https://lists.mcs.anl.gov/pipermail/petsc-users/2013-November/019577.html
>>>>> that says it could be implemented: has it been done ?
>>>>> As for a direct solve, the way to go is "KSPSetType(ksp, KSPPREONLY);
>>>>> PCSetType(pc, PCLU);", I was expecting something like "KSPSetType(ksp,
>>>>> KSPPREONLY); PCSetType(pc, PCQR);"... But it seems there is no PCQR
>>>>> available. Or is it possible to do that using "an iterative way" with a
>>>>> specific kind of KSP that triggers a Gram Schmidt orthogonalization in
>>>>> back-end ? (I have seen a KSPLSQR but could I get Q and R back ? As I
>>> understand this, I would say no: I would say the user can only get the solution)

Re: [petsc-dev] SLEPc failure

2017-10-28 Thread Jose E. Roman
The two matrices are the same ...

> El 28 oct 2017, a las 13:11, Franck Houssen <franck.hous...@inria.fr> 
> escribió:
> 
> I just added that before EPSSetOperators:
> PetscViewer viewerA; 
> PetscViewerBinaryOpen(PETSC_COMM_WORLD,"Atau.out",FILE_MODE_WRITE,&viewerA); 
> MatView(A,viewerA);
> PetscViewer viewerB; 
> PetscViewerBinaryOpen(PETSC_COMM_WORLD,"Btau.out",FILE_MODE_WRITE,&viewerB); 
> MatView(B,viewerB);
> 
> At first, I avoided binary as I didn't know if the format handles big/little 
> endianness... So I prefered ASCII.
> 
> Binary data are attached.
> 
> Franck
> 
> PS : running debian with little endian.
>>> python -c "import sys;print(0 if sys.byteorder=='big' else 1)"
> 1
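On the endianness concern raised in the PS: PETSc's binary viewer writes files in a fixed byte order (byte-swapping on little-endian hosts), so the files are portable across machines. A minimal sketch for loading such a matrix back (assumes a file "Atau.out" written as above; error checking omitted for brevity):

```c
/* Sketch: read a matrix back from a PETSc binary file. */
Mat A;
PetscViewer v;
PetscViewerBinaryOpen(PETSC_COMM_WORLD,"Atau.out",FILE_MODE_READ,&v);
MatCreate(PETSC_COMM_WORLD,&A);
MatSetFromOptions(A);   /* pick the matrix type before loading */
MatLoad(A,v);
PetscViewerDestroy(&v);
```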
> 
> 
> - Mail original -
>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>> À: "Franck Houssen" <franck.hous...@inria.fr>
>> Cc: "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
>> Envoyé: Vendredi 27 Octobre 2017 18:52:56
>> Objet: Re: [petsc-dev] SLEPc failure
>> 
>> I cannot load the files you sent. Please send the matrices in binary format.
>> The easiest way is to run your program with -eps_view_mat0 binary:Atau.bin
>> -eps_view_mat1 binary:Btau.bin
>> 
>> However, the files are written at the end of EPSSolve() so if the solve fails
>> then it will not create the files. You can try running with -eps_max_it 1
>> or add code in your main program to write the matrices.
>> 
>> Jose
>> 
>> 
>>> El 27 oct 2017, a las 12:28, Franck Houssen <franck.hous...@inria.fr>
>>> escribió:
>>> 
>>> Maybe could be convenient for the users to have an option (or an EPSSetXXX)
>>> to relax that check ?
>>> Data are attached.
>>> 
>>> Franck
>>> 
>>> - Mail original -
>>>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>>>> À: "Franck Houssen" <franck.hous...@inria.fr>
>>>> Cc: "For users of the development version of PETSc"
>>>> <petsc-dev@mcs.anl.gov>
>>>> Envoyé: Vendredi 27 Octobre 2017 10:15:44
>>>> Objet: Re: [petsc-dev] SLEPc failure
>>>> 
>>>> There is no new option. What I mean is that from 3.7 to 3.8 we changed the
>>>> line that produces this error. But it seems that it is still failing in
>>>> your
>>>> problem. Maybe your B matrix is indefinite or not exactly symmetric. Can
>>>> you
>>>> send me the matrices?
>>>> Jose
>>>> 
>>>>> El 27 oct 2017, a las 9:57, Franck Houssen <franck.hous...@inria.fr>
>>>>> escribió:
>>>>> 
>>>>> I use the development version (bitbucket clone). How to relax the check ?
>>>>> At command line option ?
>>>>> 
>>>>> Franck
>>>>> 
>>>>> - Mail original -
>>>>>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>>>>>> À: "Franck Houssen" <franck.hous...@inria.fr>
>>>>>> Cc: "For users of the development version of PETSc"
>>>>>> <petsc-dev@mcs.anl.gov>
>>>>>> Envoyé: Jeudi 26 Octobre 2017 18:49:22
>>>>>> Objet: Re: [petsc-dev] SLEPc failure
>>>>>> 
>>>>>> 
>>>>>>> El 26 oct 2017, a las 18:36, Franck Houssen <franck.hous...@inria.fr>
>>>>>>> escribió:
>>>>>>> 
>>>>>>> Here is a stack I end up with when trying to solve an eigen problem
>>>>>>> (real,
>>>>>>> sym, generalized) with SLEPc. My understanding is that, during the Gram
>>>>>>> Schmidt orthogonalisation, the projection of one basis vector turns out
>>>>>>> to
>>>>>>> be null.
>>>>>>> First, is this correct ? Second, in such cases, are there some
>>>>>>> recommended
>>>>>>> "recipe" to test/try (options) to get a clue on the problem ? (I would
>>>>>>> unfortunately perfectly understand the answer could be no !... As this
>>>>>>> totally depends on A/B).
>>>>>>> 
>>>>>>> With arpack, the eigen problem is solved (so the matrix A and B I use
>>>>>>> seems
>>>>>>> to be relevant). But, when I change from arpack to
>>>>>>> krylovschur/ciss/arnoldi, I get the stack below.
>>>>>>> 
>>>>>>> Franck
>>>>>>> 
>>>>>>> [0]PETSC ERROR: #1 BV_SafeSqrt()
>>>>>>> [0]PETSC ERROR: #2 BVNorm_Private()
>>>>>>> [0]PETSC ERROR: #3 BVNormColumn()
>>>>>>> [0]PETSC ERROR: #4 BV_NormVecOrColumn()
>>>>>>> [0]PETSC ERROR: #5 BVOrthogonalizeCGS1()
>>>>>>> [0]PETSC ERROR: #6 BVOrthogonalizeGS()
>>>>>>> [0]PETSC ERROR: #7 BVOrthonormalizeColumn()
>>>>>>> [0]PETSC ERROR: #8 EPSFullLanczos()
>>>>>>> [0]PETSC ERROR: #9 EPSSolve_KrylovSchur_Symm()
>>>>>>> [0]PETSC ERROR: #10 EPSSolve()
>>>>>> 
>>>>>> Is this with SLEPc 3.8? In SLEPc 3.8 we relaxed this check so I would
>>>>>> suggest
>>>>>> trying with it.
>>>>>> Jose
>>>>>> 
>>>>>> 
>>>> 
>>>> 
>>> 
>> 
>> 
> 



Re: [petsc-dev] QR factorization of dense matrix

2017-10-28 Thread Jose E. Roman
Matrix R must be mxm.
BVOrthogonalize computes Z=Q*R, where Q overwrites Z.
Jose
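Putting the two corrections together (R must be m x m, and BVCreateFromMat already creates the BV), a hedged sketch of the working usage, with assumed sizes:

```c
/* Sketch: block QR of a tall-skinny dense matrix via SLEPc's BV.
   Z is n x m with n >> m; R must be m x m.
   BVOrthogonalize overwrites the BV columns with Q such that Z = Q*R. */
BV  bv;
Mat Z, R;
PetscInt n = 100, m = 3;
MatCreateSeqDense(PETSC_COMM_SELF,n,m,NULL,&Z);
/* ... fill Z with MatSetValues() and assemble ... */
BVCreateFromMat(Z,&bv);              /* creates the BV; no separate BVCreate needed */
BVSetFromOptions(bv);                /* selects the BV type, e.g. the default svec */
MatCreateSeqDense(PETSC_COMM_SELF,m,m,NULL,&R);   /* m x m, not n x m */
BVOrthogonalize(bv,R);               /* Q overwrites bv; R holds the triangular factor */
```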

> El 28 oct 2017, a las 13:11, Franck Houssen <franck.hous...@inria.fr> 
> escribió:
> 
> I've seen that !... But can't get BVOrthogonalize to work.
> 
> I tried:
> Mat Z; MatCreateSeqDense(PETSC_COMM_SELF, n, m, NULL, );
> ...; // MatSetValues(Z, ...)
> BVCreate(PETSC_COMM_SELF, );
> BVCreateFromMat(Z, ); // Z is tall-skinny
> Mat R; MatCreateSeqDense(PETSC_COMM_SELF, n, m, NULL, ); // Same n, m than 
> Z.
> BVOrthogonalize(bv, R);
> 
> But BVOrthogonalize fails with :
>> [0]PETSC ERROR: Nonconforming object sizes
>> [0]PETSC ERROR: Mat argument is not square, it has 1 rows and 3 columns
> 
> So, as I didn't get what's wrong, I was looking for another way to do this.
> 
> Franck
> 
> - Mail original -
>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>> À: "Franck Houssen" <franck.hous...@inria.fr>
>> Cc: "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
>> Envoyé: Vendredi 27 Octobre 2017 19:03:37
>> Objet: Re: [petsc-dev] QR factorization of dense matrix
>> 
>> Franck,
>> 
>> SLEPc has some support for this, but it is intended only for tall-skinny
>> matrices, that is, when the number of columns is much smaller than rows. For
>> an almost square matrix you should not use it.
>> 
>> Have a look at this
>> http://slepc.upv.es/documentation/current/docs/manualpages/BV/BVOrthogonalize.html
>> http://slepc.upv.es/documentation/current/docs/manualpages/BV/BVOrthogBlockType.html
>> 
>> You can see there are three methods. All of them have drawbacks:
>> GS: This is a Gram-Schmidt QR, computed column by column, so it is slower
>> than the other two. However, it is robust.
>> CHOL: Cholesky QR, it is not numerically stable. In the future we will add
>> Cholesky QR2.
>> TSQR: Unfortunately this is not implemented in parallel. I wanted to add the
>> parallel version for 3.8, but didn't have time. It will be added soon.
>> 
>> You can use BVCreateFromMat() to create a BV object from a Mat.
>> 
>> Jose
>> 
>> 
>>> El 27 oct 2017, a las 18:39, Franck Houssen <franck.hous...@inria.fr>
>>> escribió:
>>> 
>>> I am looking for QR factorization of (sequential) dense matrix: is this
>>> available in PETSc ? I "just" need the diagonal of R (I do not need
>>> neither the full content of R, nor Q)
>>> 
>>> I found that (old !) thread
>>> https://lists.mcs.anl.gov/pipermail/petsc-users/2013-November/019577.html
>>> that says it could be implemented: has it been done ?
>>> As for a direct solve, the way to go is "KSPSetType(ksp, KSPPREONLY);
>>> PCSetType(pc, PCLU);", I was expecting something like "KSPSetType(ksp,
>>> KSPPREONLY); PCSetType(pc, PCQR);"... But it seems there is no PCQR
>>> available. Or is it possible to do that using "an iterative way" with a
>>> specific kind of KSP that triggers a Gram Schmidt orthogonalization in
>>> back-end ? (I have seen a KSPLSQR but could I get Q and R back ? As I
>>> understand this, I would say no: I would say the user can only get the
>>> solution)
>>> 
>>> Is it possible to QR a (sequential) dense matrix in PETSc ? If yes, what
>>> are the steps to follow ?
>>> 
>>> Franck
>>> 
>>> My understanding is that DGEQRF from lapack can do "more" than what I need,
>>> but I am not sure whether I can use it from PETSc through a KSP:
>>>>> git grep DGEQRF
>>> include/petscblaslapack_stdcall.h:#  define LAPACKgeqrf_ DGEQRF
>>>>> git grep LAPACKgeqrf_
>>> include/petscblaslapack.h:PETSC_EXTERN void
>>> LAPACKgeqrf_(PetscBLASInt*,PetscBLASInt*,PetscScalar*,PetscBLASInt*,PetscScalar*,PetscScalar*,PetscBLASInt*,PetscBLASInt*);
>>> include/petscblaslapack_mangle.h:#define LAPACKgeqrf_
>>> PETSCBLAS(geqrf,GEQRF)
>>> include/petscblaslapack_stdcall.h:#  define LAPACKgeqrf_ SGEQRF
>>> include/petscblaslapack_stdcall.h:#  define LAPACKgeqrf_ DGEQRF
>>> include/petscblaslapack_stdcall.h:#  define LAPACKgeqrf_ CGEQRF
>>> include/petscblaslapack_stdcall.h:#  define LAPACKgeqrf_ ZGEQRF
>>> include/petscblaslapack_stdcall.h:PETSC_EXTERN void PETSC_STDCALL
>>> LAPACKgeqrf_(PetscBLASInt*,PetscBLASInt*,PetscScalar*,PetscBLASInt*,PetscScalar*,PetscScalar*,PetscBLASInt*,PetscBLASInt*);
>>> src/dm/dt/interface/dt.c:
>>> PetscStackCallBLAS("LAPACKgeqrf",LAPACKgeqrf_(,,A,,tau,work,,));

Re: [petsc-dev] QR factorization of dense matrix

2017-10-27 Thread Jose E. Roman
Franck,

SLEPc has some support for this, but it is intended only for tall-skinny 
matrices, that is, when the number of columns is much smaller than rows. For an 
almost square matrix you should not use it.

Have a look at this
http://slepc.upv.es/documentation/current/docs/manualpages/BV/BVOrthogonalize.html
http://slepc.upv.es/documentation/current/docs/manualpages/BV/BVOrthogBlockType.html

You can see there are three methods. All of them have drawbacks:
GS: This is a Gram-Schmidt QR, computed column by column, so it is slower than 
the other two. However, it is robust.
CHOL: Cholesky QR, it is not numerically stable. In the future we will add 
Cholesky QR2.
TSQR: Unfortunately this is not implemented in parallel. I wanted to add the 
parallel version for 3.8, but didn't have time. It will be added soon.

You can use BVCreateFromMat() to create a BV object from a Mat.
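For reference, the block method can also be selected explicitly, either on the command line (-bv_orthog_block gs|chol|tsqr) or programmatically; a hedged sketch, assuming the BVSetOrthogonalization() signature of SLEPc 3.8:

```c
/* Sketch: select the Gram-Schmidt block QR explicitly (robust but slower).
   The column-orthogonalization arguments (type, refine, eta) keep their
   defaults here; only the last argument changes the block method. */
BVSetOrthogonalization(bv,BV_ORTHOG_CGS,BV_ORTHOG_REFINE_IFNEEDED,
                       PETSC_DEFAULT,BV_ORTHOG_BLOCK_GS);
```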

Jose


> El 27 oct 2017, a las 18:39, Franck Houssen  
> escribió:
> 
> I am looking for QR factorization of (sequential) dense matrix: is this 
> available in PETSc ? I "just" need the diagonal of R (I do not need neither 
> the full content of R, nor Q)
> 
> I found that (old !) thread 
> https://lists.mcs.anl.gov/pipermail/petsc-users/2013-November/019577.html 
> that says it could be implemented: has it been done ?
> As for a direct solve, the way to go is "KSPSetType(ksp, KSPPREONLY); 
> PCSetType(pc, PCLU);", I was expecting something like "KSPSetType(ksp, 
> KSPPREONLY); PCSetType(pc, PCQR);"... But it seems there is no PCQR 
> available. Or is it possible to do that using "an iterative way" with a 
> specific kind of KSP that triggers a Gram Schmidt orthogonalization in 
> back-end ? (I have seen a KSPLSQR but could I get Q and R back ? As I 
> understand this, I would say no: I would say the user can only get the 
> solution)
> 
> Is it possible to QR a (sequential) dense matrix in PETSc ? If yes, what are 
> the steps to follow ?
> 
> Franck
> 
> My understanding is that DGEQRF from lapack can do "more" than what I need, 
> but I am not sure whether I can use it from PETSc through a KSP:
> >> git grep DGEQRF
> include/petscblaslapack_stdcall.h:#  define LAPACKgeqrf_ DGEQRF
> >> git grep LAPACKgeqrf_
> include/petscblaslapack.h:PETSC_EXTERN void 
> LAPACKgeqrf_(PetscBLASInt*,PetscBLASInt*,PetscScalar*,PetscBLASInt*,PetscScalar*,PetscScalar*,PetscBLASInt*,PetscBLASInt*);
> include/petscblaslapack_mangle.h:#define LAPACKgeqrf_ PETSCBLAS(geqrf,GEQRF)
> include/petscblaslapack_stdcall.h:#  define LAPACKgeqrf_ SGEQRF
> include/petscblaslapack_stdcall.h:#  define LAPACKgeqrf_ DGEQRF
> include/petscblaslapack_stdcall.h:#  define LAPACKgeqrf_ CGEQRF
> include/petscblaslapack_stdcall.h:#  define LAPACKgeqrf_ ZGEQRF
> include/petscblaslapack_stdcall.h:PETSC_EXTERN void PETSC_STDCALL 
> LAPACKgeqrf_(PetscBLASInt*,PetscBLASInt*,PetscScalar*,PetscBLASInt*,PetscScalar*,PetscScalar*,PetscBLASInt*,PetscBLASInt*);
> src/dm/dt/interface/dt.c:  
> PetscStackCallBLAS("LAPACKgeqrf",LAPACKgeqrf_(,,A,,tau,work,,));
> src/dm/dt/interface/dtfv.c:  
> LAPACKgeqrf_(,,A,,tau,work,,);
> src/ksp/ksp/impls/gmres/agmres/agmres.c:  
> PetscStackCallBLAS("LAPACKgeqrf",LAPACKgeqrf_(, , 
> agmres->hh_origin, , agmres->tau, agmres->work, , ));
> src/ksp/pc/impls/bddc/bddcprivate.c:  
> PetscStackCallBLAS("LAPACKgeqrf",LAPACKgeqrf_(_M,_N,qr_basis,_LDA,qr_tau,_work_t,_work,));
> src/ksp/pc/impls/bddc/bddcprivate.c:  
> PetscStackCallBLAS("LAPACKgeqrf",LAPACKgeqrf_(_M,_N,qr_basis,_LDA,qr_tau,qr_work,_work,));
> src/ksp/pc/impls/gamg/agg.c:  
> PetscStackCallBLAS("LAPACKgeqrf",LAPACKgeqrf_(, , qqc, , TAU, 
> WORK, , ));
> src/tao/leastsquares/impls/pounders/pounders.c:
> PetscStackCallBLAS("LAPACKgeqrf",LAPACKgeqrf_(,,mfqP->Q_tmp,,mfqP->tau_tmp,mfqP->mwork,,));
> src/tao/leastsquares/impls/pounders/pounders.c:
> PetscStackCallBLAS("LAPACKgeqrf",LAPACKgeqrf_(,,mfqP->Q,,mfqP->tau,mfqP->mwork,,));



Re: [petsc-dev] SLEPc failure

2017-10-27 Thread Jose E. Roman
I cannot load the files you sent. Please send the matrices in binary format. 
The easiest way is to run your program with -eps_view_mat0 binary:Atau.bin 
-eps_view_mat1 binary:Btau.bin

However, the files are written at the end of EPSSolve() so if the solve fails 
then it will not create the files. You can try running with -eps_max_it 1  or 
add code in your main program to write the matrices.

Jose


> El 27 oct 2017, a las 12:28, Franck Houssen <franck.hous...@inria.fr> 
> escribió:
> 
> Maybe could be convenient for the users to have an option (or an EPSSetXXX) 
> to relax that check ?
> Data are attached.
> 
> Franck 
> 
> ----- Mail original -
>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>> À: "Franck Houssen" <franck.hous...@inria.fr>
>> Cc: "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
>> Envoyé: Vendredi 27 Octobre 2017 10:15:44
>> Objet: Re: [petsc-dev] SLEPc failure
>> 
>> There is no new option. What I mean is that from 3.7 to 3.8 we changed the
>> line that produces this error. But it seems that it is still failing in your
>> problem. Maybe your B matrix is indefinite or not exactly symmetric. Can you
>> send me the matrices?
>> Jose
>> 
>>> El 27 oct 2017, a las 9:57, Franck Houssen <franck.hous...@inria.fr>
>>> escribió:
>>> 
>>> I use the development version (bitbucket clone). How to relax the check ?
>>> At command line option ?
>>> 
>>> Franck
>>> 
>>> - Mail original -
>>>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>>>> À: "Franck Houssen" <franck.hous...@inria.fr>
>>>> Cc: "For users of the development version of PETSc"
>>>> <petsc-dev@mcs.anl.gov>
>>>> Envoyé: Jeudi 26 Octobre 2017 18:49:22
>>>> Objet: Re: [petsc-dev] SLEPc failure
>>>> 
>>>> 
>>>>> El 26 oct 2017, a las 18:36, Franck Houssen <franck.hous...@inria.fr>
>>>>> escribió:
>>>>> 
>>>>> Here is a stack I end up with when trying to solve an eigen problem
>>>>> (real,
>>>>> sym, generalized) with SLEPc. My understanding is that, during the Gram
>>>>> Schmidt orthogonalisation, the projection of one basis vector turns out
>>>>> to
>>>>> be null.
>>>>> First, is this correct ? Second, in such cases, are there some
>>>>> recommended
>>>>> "recipe" to test/try (options) to get a clue on the problem ? (I would
>>>>> unfortunately perfectly understand the answer could be no !... As this
>>>>> totally depends on A/B).
>>>>> 
>>>>> With arpack, the eigen problem is solved (so the matrix A and B I use
>>>>> seems
>>>>> to be relevant). But, when I change from arpack to
>>>>> krylovschur/ciss/arnoldi, I get the stack below.
>>>>> 
>>>>> Franck
>>>>> 
>>>>> [0]PETSC ERROR: #1 BV_SafeSqrt()
>>>>> [0]PETSC ERROR: #2 BVNorm_Private()
>>>>> [0]PETSC ERROR: #3 BVNormColumn()
>>>>> [0]PETSC ERROR: #4 BV_NormVecOrColumn()
>>>>> [0]PETSC ERROR: #5 BVOrthogonalizeCGS1()
>>>>> [0]PETSC ERROR: #6 BVOrthogonalizeGS()
>>>>> [0]PETSC ERROR: #7 BVOrthonormalizeColumn()
>>>>> [0]PETSC ERROR: #8 EPSFullLanczos()
>>>>> [0]PETSC ERROR: #9 EPSSolve_KrylovSchur_Symm()
>>>>> [0]PETSC ERROR: #10 EPSSolve()
>>>> 
>>>> Is this with SLEPc 3.8? In SLEPc 3.8 we relaxed this check so I would
>>>> suggest
>>>> trying with it.
>>>> Jose
>>>> 
>>>> 
>> 
>> 
> 



Re: [petsc-dev] SLEPc failure

2017-10-27 Thread Jose E. Roman
There is no new option. What I mean is that from 3.7 to 3.8 we changed the line 
that produces this error. But it seems that it is still failing in your 
problem. Maybe your B matrix is indefinite or not exactly symmetric. Can you 
send me the matrices?
Jose

> El 27 oct 2017, a las 9:57, Franck Houssen <franck.hous...@inria.fr> escribió:
> 
> I use the development version (bitbucket clone). How to relax the check ? At 
> command line option ?
> 
> Franck
> 
> - Mail original -
>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>> À: "Franck Houssen" <franck.hous...@inria.fr>
>> Cc: "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
>> Envoyé: Jeudi 26 Octobre 2017 18:49:22
>> Objet: Re: [petsc-dev] SLEPc failure
>> 
>> 
>>> El 26 oct 2017, a las 18:36, Franck Houssen <franck.hous...@inria.fr>
>>> escribió:
>>> 
>>> Here is a stack I end up with when trying to solve an eigen problem (real,
>>> sym, generalized) with SLEPc. My understanding is that, during the Gram
>>> Schmidt orthogonalisation, the projection of one basis vector turns out to
>>> be null.
>>> First, is this correct ? Second, in such cases, are there some recommended
>>> "recipe" to test/try (options) to get a clue on the problem ? (I would
>>> unfortunately perfectly understand the answer could be no !... As this
>>> totally depends on A/B).
>>> 
>>> With arpack, the eigen problem is solved (so the matrix A and B I use seems
>>> to be relevant). But, when I change from arpack to
>>> krylovschur/ciss/arnoldi, I get the stack below.
>>> 
>>> Franck
>>> 
>>> [0]PETSC ERROR: #1 BV_SafeSqrt()
>>> [0]PETSC ERROR: #2 BVNorm_Private()
>>> [0]PETSC ERROR: #3 BVNormColumn()
>>> [0]PETSC ERROR: #4 BV_NormVecOrColumn()
>>> [0]PETSC ERROR: #5 BVOrthogonalizeCGS1()
>>> [0]PETSC ERROR: #6 BVOrthogonalizeGS()
>>> [0]PETSC ERROR: #7 BVOrthonormalizeColumn()
>>> [0]PETSC ERROR: #8 EPSFullLanczos()
>>> [0]PETSC ERROR: #9 EPSSolve_KrylovSchur_Symm()
>>> [0]PETSC ERROR: #10 EPSSolve()
>> 
>> Is this with SLEPc 3.8? In SLEPc 3.8 we relaxed this check so I would suggest
>> trying with it.
>> Jose
>> 
>> 



Re: [petsc-dev] SLEPc failure

2017-10-26 Thread Jose E. Roman

> El 26 oct 2017, a las 18:36, Franck Houssen  
> escribió:
> 
> Here is a stack I end up with when trying to solve an eigen problem (real, 
> sym, generalized) with SLEPc. My understanding is that, during the Gram 
> Schmidt orthogonalisation, the projection of one basis vector turns out to be 
> null.
> First, is this correct ? Second, in such cases, are there some recommended 
> "recipe" to test/try (options) to get a clue on the problem ? (I would 
> unfortunately perfectly understand the answer could be no !... As this 
> totally depends on A/B).
> 
> With arpack, the eigen problem is solved (so the matrix A and B I use seems 
> to be relevant). But, when I change from arpack to krylovschur/ciss/arnoldi, 
> I get the stack below.
> 
> Franck
> 
> [0]PETSC ERROR: #1 BV_SafeSqrt() 
> [0]PETSC ERROR: #2 BVNorm_Private() 
> [0]PETSC ERROR: #3 BVNormColumn() 
> [0]PETSC ERROR: #4 BV_NormVecOrColumn() 
> [0]PETSC ERROR: #5 BVOrthogonalizeCGS1() 
> [0]PETSC ERROR: #6 BVOrthogonalizeGS() 
> [0]PETSC ERROR: #7 BVOrthonormalizeColumn()
> [0]PETSC ERROR: #8 EPSFullLanczos() 
> [0]PETSC ERROR: #9 EPSSolve_KrylovSchur_Symm() 
> [0]PETSC ERROR: #10 EPSSolve() 

Is this with SLEPc 3.8? In SLEPc 3.8 we relaxed this check so I would suggest 
trying with it.
Jose



Re: [petsc-dev] What is the difference between shift and target in SLEPc ?

2017-09-25 Thread Jose E. Roman
Yes. 

> El 25 sept 2017, a las 15:37, Franck Houssen <franck.hous...@inria.fr> 
> escribió:
> 
> OK, thanks, this is helpful.
> 
> If I got you correctly: beforehand, there is no way to know exactly what the 
> eigen values are. If it turns out that an eigen value makes A-sigma*I or 
> A-sigma*B singular, then the solve may break. If so, afterwards, it's 
> possible to change slightly the shift to avoid solve break down (but there is 
> no way to know that beforehand).
> 
> Franck
> 
> - Mail original -
>> De: "Jose E. Roman" <jro...@dsic.upv.es>
>> À: "Franck Houssen" <franck.hous...@inria.fr>
>> Cc: "For users of the development version of PETSc" <petsc-dev@mcs.anl.gov>
>> Envoyé: Lundi 25 Septembre 2017 14:50:48
>> Objet: Re: [petsc-dev] What is the difference between shift and target in 
>> SLEPc ?
>> 
>> 
>>> El 25 sept 2017, a las 13:21, Franck Houssen <franck.hous...@inria.fr>
>>> escribió:
>>> 
>>> What is the difference between shift and target in SLEPc ? Shift
>>> (STSetShift) is clear to me, but, target (EPSSetTarget) is not.
>>> Can somebody give an example where one want/need to have a target which
>>> would be different from the shift ?
>>> 
>>> Franck
>> 
>> In shift-and-invert the shift is equal to the target by default. The target
>> is what you use to indicate where you want the eigenvalues to be sought (it
>> can be used without shift-and-invert). Normal usage is having both values
>> equal. If the target is exactly equal to an eigenvalue, then you may want to
>> perturb the shift (change it to a slightly different value) in order to
>> avoid a singular matrix A-sigma*I in the linear solves. (Some solvers such
>> as MUMPS do not have problems with singular matrices, so this is not
>> necessary in that case).
>> 
>> Jose
>> 
>> 



Re: [petsc-dev] What is the difference between shift and target in SLEPc ?

2017-09-25 Thread Jose E. Roman

> On 25 Sep 2017, at 13:21, Franck Houssen  
> wrote:
> 
> What is the difference between shift and target in SLEPc? Shift (STSetShift) 
> is clear to me, but target (EPSSetTarget) is not.
> Can somebody give an example where one wants/needs a target different 
> from the shift?
> 
> Franck

In shift-and-invert the shift is equal to the target by default. The target is 
what you use to indicate where you want the eigenvalues to be sought (it can be 
used without shift-and-invert). Normal usage is having both values equal. If 
the target is exactly equal to an eigenvalue, then you may want to perturb the 
shift (change it to a slightly different value) in order to avoid a singular 
matrix A-sigma*I in the linear solves. (Some solvers such as MUMPS do not have 
problems with singular matrices, so this is not necessary in that case).
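The explanation above can be sketched in a few lines of SLEPc C code. This is an illustration, not code from the thread: the function name, the perturbation value 1e-3, and the assumption that a manual STSetShift survives setup are all placeholders to be checked against the SLEPc manual pages.

```c
#include <slepceps.h>

/* Sketch: seek eigenvalues closest to sigma via shift-and-invert, but
   perturb only the shift so that A - shift*I is (hopefully) nonsingular.
   The target stays at sigma, so eigenvalue selection is unaffected. */
PetscErrorCode SolveNearTarget(EPS eps, PetscScalar sigma)
{
  ST             st;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = EPSSetTarget(eps, sigma);CHKERRQ(ierr);           /* where eigenvalues are sought */
  ierr = EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);CHKERRQ(ierr);
  ierr = EPSGetST(eps, &st);CHKERRQ(ierr);
  ierr = STSetType(st, STSINVERT);CHKERRQ(ierr);           /* shift-and-invert: shift = target by default */
  ierr = STSetShift(st, sigma + 1e-3);CHKERRQ(ierr);       /* placeholder perturbation of the shift only */
  ierr = EPSSolve(eps);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```

The same effect can be obtained from the command line with options such as -eps_target and -st_shift.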

Jose



Re: [petsc-dev] MatCreateRedundantMatrix() does not work for rectangular matrices

2017-07-27 Thread Jose E. Roman
Works perfectly. Thanks.

> On 27 Jul 2017, at 21:41, Hong <hzh...@mcs.anl.gov> wrote:
> 
> Jose,
> Bug is fixed in branch hzhang/bugfix-RedundantMat/master
> https://bitbucket.org/petsc/petsc/commits/7bbdc51d16de20c2a4daada3a4bf77c9346d6e84
> 
> Let me know if you have any comments.
> 
> Hong
> 
> On Mon, Jul 24, 2017 at 9:02 AM, Hong <hzh...@mcs.anl.gov> wrote:
> This example works well on maint-branch.
> 
> There is a bug in master branch.
> I'll fix it after I'm back from the vacation (Thursday).
> 
> Hong
> 
> 
> On Sat, Jul 22, 2017 at 1:13 PM, Jose E. Roman <jro...@dsic.upv.es> wrote:
> Attached is an example that shows that MatCreateRedundantMatrix() fails if 
> the matrix is rectangular. Is it a bug or an unsupported case?
> 
> Jose
> 
> 
> 



Re: [petsc-dev] GPU regression tests

2017-07-27 Thread Jose E. Roman
Karl,

We have detected another problem. Could you take care of it?
MatDuplicate() does not work for MATSEQAIJCUSPARSE (probably also for 
MATMPIAIJCUSPARSE).
The attached example creates a matrix and duplicates it. There are two cases:

1) With a diagonal matrix it fails on GPU because MatDuplicate() did not copy 
the CUDA-specific data.

$ ./ex_duplicate -diag -mat_type aijcusparse -vec_type cuda 

2) With a fully dense matrix (or any matrix where I-node routines are used), it 
does not fail but operations are done on CPU instead of GPU (because it changes 
the pointers to MatMult_SeqAIJ_Inode etc).

$ ./ex_duplicate -mat_type aijcusparse -vec_type cuda 
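The attached ex_duplicate.c appears in the archive only as binary data; a reproducer along the same lines might look like this (a hedged sketch, not the original attachment: the -diag handling, sizes, and values are guesses).

```c
#include <petscmat.h>

/* Sketch of a MatDuplicate reproducer: build a diagonal or fully dense
   AIJ matrix (type set via -mat_type aijcusparse), duplicate it, and
   apply the duplicate, which is where the GPU problems were observed. */
int main(int argc, char **argv)
{
  Mat            A, B;
  Vec            x, y;
  PetscInt       i, j, n = 10;
  PetscBool      diag = PETSC_FALSE;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = PetscOptionsGetBool(NULL, NULL, "-diag", &diag, NULL);CHKERRQ(ierr);

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);   /* honors -mat_type aijcusparse */
  ierr = MatSetUp(A);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    if (diag) {
      ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
    } else {
      for (j = 0; j < n; j++) { ierr = MatSetValue(A, i, j, 1.0, INSERT_VALUES);CHKERRQ(ierr); }
    }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatDuplicate(A, MAT_COPY_VALUES, &B);CHKERRQ(ierr); /* the operation under test */
  ierr = MatCreateVecs(B, &x, &y);CHKERRQ(ierr);             /* honors -vec_type cuda */
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = MatMult(B, x, y);CHKERRQ(ierr); /* fails or falls back to the CPU as described */

  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = MatDestroy(&B);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```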

Thanks.
Jose


> On 26 Jul 2017, at 21:12, Karl Rupp  wrote:
> 
> Hi Jose,
> 
>> With pull request #719 we have finished a set of fixes to VECCUDA stuff. 
>> With these changes it is now possible to run many tests in SLEPc's testsuite 
>> on GPU (AIJCUSPARSE+VECCUDA). These tests will be included in the nightly 
>> tests from now on.
> 
> great!
> 
>> However, PETSc nightly tests related to VECCUDA are not being run. The 
>> reason is that arch-cuda-double.py and arch-cuda-double.py have 
>> --with-cusp=1 and this option disables VECCUDA code. CUSP tests are separate 
>> from VECCUDA tests.
> 
> As you may remember, I want to get rid of VECCUSP (and if possible also 
> MATAIJCUSP), because the functionality is now provided natively through the 
> CUDA SDK (VECCUDA, MATAIJCUSPARSE). Only the preconditioners from CUSP, most 
> notably PCSACUSP, will stay. This way we can then easily switch over all the 
> tests to VECCUDA.
> 
> 
>> Another thing is that not all VECCUDA tests pass, because of a pending issue 
>> related to MatMultTranspose_MPIAIJCUSPARSE. This was reported last year: 
>> https://bitbucket.org/petsc/petsc/pull-requests/490/gpu-regression-tests
> 
> Alright, thanks for the reminder. Let me get this fixed. :-)
> 
> Best regards,
> Karli


makefile
Description: Binary data


ex_duplicate.c
Description: Binary data


[petsc-dev] GPU regression tests

2017-07-26 Thread Jose E. Roman
Hi.

With pull request #719 we have finished a set of fixes to VECCUDA stuff. With 
these changes it is now possible to run many tests in SLEPc's testsuite on GPU 
(AIJCUSPARSE+VECCUDA). These tests will be included in the nightly tests from 
now on.

However, PETSc nightly tests related to VECCUDA are not being run. The reason 
is that arch-cuda-double.py and arch-cuda-double.py have --with-cusp=1 and this 
option disables VECCUDA code. CUSP tests are separate from VECCUDA tests.

Another thing is that not all VECCUDA tests pass, because of a pending issue 
related to MatMultTranspose_MPIAIJCUSPARSE. This was reported last year: 
https://bitbucket.org/petsc/petsc/pull-requests/490/gpu-regression-tests

Jose



[petsc-dev] MatCreateRedundantMatrix() does not work for rectangular matrices

2017-07-22 Thread Jose E. Roman
Attached is an example that shows that MatCreateRedundantMatrix() fails if the 
matrix is rectangular. Is it a bug or an unsupported case?
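The attached ex207.c appears in the archive only as binary data; the failing pattern would be roughly as follows (a hedged sketch, not the original attachment: sizes, the nonzero pattern, and the nsubcomm/subcomm arguments are placeholders to be checked against the MatCreateRedundantMatrix manual page).

```c
#include <petscmat.h>

/* Sketch: create a rectangular (m != n) parallel AIJ matrix and ask for
   one redundant sequential copy per process, which is where the failure
   on rectangular matrices was observed. */
int main(int argc, char **argv)
{
  Mat            A, Ared;
  PetscInt       i, rstart, rend, m = 4, n = 6; /* rectangular: m != n */
  PetscMPIInt    size;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, m, n, n, NULL, n, NULL, &A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ierr = MatSetValue(A, i, i, 1.0, INSERT_VALUES);CHKERRQ(ierr); /* any nonzero pattern */
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* one redundant copy per process */
  ierr = MatCreateRedundantMatrix(A, size, MPI_COMM_NULL, MAT_INITIAL_MATRIX, &Ared);CHKERRQ(ierr);

  ierr = MatDestroy(&Ared);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```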

Jose



ex207.c
Description: Binary data


Re: [petsc-dev] DMCopyDMSNES

2017-07-20 Thread Jose E. Roman
It is used in SLEPc in code contributed by Fande Kong.
https://bitbucket.org/slepc/slepc/pull-requests/14/example-34-work-with-the-monolithic-update/diff

I have no problem in including the private header, but we in SLEPc always try 
to use public headers only.

Jose


> On 20 Jul 2017, at 20:21, Barry Smith <bsm...@mcs.anl.gov> wrote:
> 
> 
>  It is developer level, so might not belong in the public headers. 
> 
>  Do you need it in the public headers?
> 
> 
>> On Jul 20, 2017, at 11:27 AM, Jose E. Roman <jro...@dsic.upv.es> wrote:
>> 
>> DMCopyDMSNES() is documented
>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/DMCopyDMSNES.html
>> 
>> but it is defined in the private header snesimpl.h. Shouldn't it be in the 
>> public header petscsnes.h?
>> 
>> Jose
>> 
> 



[petsc-dev] DMCopyDMSNES

2017-07-20 Thread Jose E. Roman
DMCopyDMSNES() is documented
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/DMCopyDMSNES.html

but it is defined in the private header snesimpl.h. Shouldn't it be in the 
public header petscsnes.h?

Jose



Re: [petsc-dev] [petsc-maint] problems compiling petsc on MAC

2017-05-01 Thread Jose E. Roman

> On 1 May 2017, at 22:27, Barry Smith  wrote:
> 
>  Is there a reason to have the same symbol in both libraries (note they come 
> from the fortran versions)
> 
> $ nm -o arch-uni-f2cblaslapack/lib/libf2cblas.a | grep xerbla | grep " T "
> arch-uni-f2cblaslapack/lib/libf2cblas.a:xerbla_array.o:  T 
> _xerbla_array_
> arch-uni-f2cblaslapack/lib/libf2cblas.a:xerbla.o:  T _xerbla_
> ~/Src/petsc (next=) arch-basic
> $ nm -o arch-uni-f2cblaslapack/lib/libf2clapack.a | grep xerbla | grep " T "
> arch-uni-f2cblaslapack/lib/libf2clapack.a:xerbla_array.o:  T 
> _xerbla_array_
> arch-uni-f2cblaslapack/lib/libf2clapack.a:xerbla.o:  T 
> _xerbla_
> 
>  Should we remove them?

I think it can be removed from libf2clapack.a. Not sure if that will have a 
side effect.

Jose



[petsc-dev] Extending MatLRC

2016-11-14 Thread Jose E. Roman
Hi.

I need to work with low-rank matrices represented as the outer product of 
tall-skinny matrices. Specifically, I need to cover these cases:
- Symmetric positive-definite: X*X'
- Symmetric indefinite: X*C*X'
- Non-symmetric: X*Y' (or maybe X*C*Y')

This could be added by extending MATLRC (allowing A to be NULL). If you agree I 
could create a pull request. If you prefer to keep MATLRC as it is now, I could 
add this to SLEPc.
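In PETSc's existing MATLRC, the matrix represents A + U*C*V'; the proposed extension lets A be NULL so the matrix is purely the outer product. A hedged sketch of the intended usage, following the MatCreateLRC signature as later documented (X, Y, and c are assumed to exist; C is diagonal, given by the Vec c, and passing NULL for c to mean the identity is an assumption):

```c
#include <petscmat.h>

/* Sketch: build the three low-rank forms from tall-skinny X (and Y)
   with MATLRC, assuming the extension where A may be NULL.
   N = A + U*diag(c)*V'; with A = NULL this is just U*diag(c)*V'. */
PetscErrorCode LowRankExamples(Mat X, Mat Y, Vec c)
{
  Mat            Nspd, Nsym, Nnonsym;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* Symmetric positive-definite: X*X' */
  ierr = MatCreateLRC(NULL, X, NULL, X, &Nspd);CHKERRQ(ierr);
  /* Symmetric indefinite: X*C*X' with C = diag(c) */
  ierr = MatCreateLRC(NULL, X, c, X, &Nsym);CHKERRQ(ierr);
  /* Non-symmetric: X*Y' */
  ierr = MatCreateLRC(NULL, X, NULL, Y, &Nnonsym);CHKERRQ(ierr);
  ierr = MatDestroy(&Nspd);CHKERRQ(ierr);
  ierr = MatDestroy(&Nsym);CHKERRQ(ierr);
  ierr = MatDestroy(&Nnonsym);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```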

Jose



Re: [petsc-dev] unpleasantness in CUDA tests in master

2016-06-21 Thread Jose E. Roman

> On 21 Jun 2016, at 9:18, Karl Rupp  wrote:
> 
> On 06/21/2016 04:16 AM, Barry Smith wrote:
>> 
>> ftp://ftp.mcs.anl.gov/pub/petsc/nightlylogs/archive/2016/06/20/master.html
> 
> I'll fix this.
> 
> Best regards,
> Karli
> 
> 

Probably some of these are fixed in PR #490 (GPU regression tests).

Jose



Re: [petsc-dev] PETSc "testing" infrastructure

2016-06-18 Thread Jose E. Roman

> On 17 Jun 2016, at 0:18, Barry Smith  wrote:
> 
> 
>   There is a lot going on currently to enhance the PETSc "testing" 
> infrastructure; in particular Lisandro has begun to set up stuff on both 
> github and bitbucket.
> 
>   I've update the PETSc "Dashboard" for testing at 
> ftp://ftp.mcs.anl.gov/pub/petsc/nightlylogs/index.html with more links and a 
> bit more context so people can understand it better. I would like links to 
> other high-level packages testing dashboards such as SLEPc so if you know any 
> send them to me.
> 
>   Here "testing" does not just mean running the test suite but also means 
> collecting gcov information, running static analyzers on the code, running 
> with valgrind, controlling symbol visibility and anything else you can think 
> of that helps detect bugs and flaws in the software. For example tools that 
> automatically check that all visible symbols have manual pages and report 
> problems, that manual pages are complete, etc. would be good additions. Currently 
> we rely too much on the kindness of strangers who report bugs in our 
> documentation.
> 
>   Comments, input?
> 
>   Barry
> 
> Currently this file is under RCS on the MCS filesystem, if others would like 
> to contribute to it I'll put it under git at bitbucket.
> 

The SLEPc nightly tests can be found here:
  http://slepc.upv.es/buildbot
Go for instance to the "Grid" page and check the last column.
The builds use petsc-master and slepc-next.

One of the builds uses gcov, the result can be seen here:
  http://slepc.upv.es/buildbot/coverage

I plan to clean/improve/add tests in the next months.
Jose



Re: [petsc-dev] Undefined symbol in SLEPc dylib (Haskell bindings, OSX)

2016-06-09 Thread Jose E. Roman
I have no idea why this symbol is not resolved. I have never had a similar 
problem. 

Anyway, if it helps, I have pushed a commit to maint where all occurrences of 
this symbol are removed, since they are not necessary anyway.
https://bitbucket.org/slepc/slepc/commits/b3c04e8

Jose



> On 9 Jun 2016, at 13:06, Matthew Knepley  wrote:
> 
> On Thu, Jun 9, 2016 at 12:01 PM, Marco Zocca  wrote:
> All the KSPConv* symbols are defined (`nm -u ...` shows the blank string):
> 
> $ nm ${PETSC_DIR}/${PETSC_ARCH}/lib/libpetsc.3.7.2.dylib | grep KSPConv
> 
> 0110a652 T _KSPConvergedDefault
> 01109703 T _KSPConvergedDefaultCreate
> 0110b934 T _KSPConvergedDefaultDestroy
> 01109a96 T _KSPConvergedDefaultSetUIRNorm
> 0110a074 T _KSPConvergedDefaultSetUMIRNorm
> 0106533d T _KSPConvergedLSQR
> 01743280 D _KSPConvergedReasons
> 
> Then that symbol should be resolved by this library. It looks like you have a 
> problem in your link line.
> 
>Matt
>  
> 0173d300 s _KSPConvergedReasons_Shifted
> 0110920d T _KSPConvergedSkip
> 01224e94 t _SNES_TR_KSPConverged_Destroy
> 012248f6 t _SNES_TR_KSPConverged_Private
> 
> 
> 
> >>
> >> I encounter this bug when accessing the PETSc and SLEPc dynamic
> >> library under OSX:
> >>
> >> user specified .o/.so/.DLL could not be loaded
> >> (dlopen($SLEPC_DIR/arch-darwin-c-debug/lib/libslepc.dylib, 5): Symbol
> >> not found: _KSPConvergedReasons
> >>   Referenced from: $SLEPC_DIR/arch-darwin-c-debug/lib/libslepc.dylib
> >>   Expected in: flat namespace
> >>  in $SLEPC_DIR/arch-darwin-c-debug/lib/libslepc.dylib)
> >>
> >> Surely, enough, `nm -u` shows _KSPConvergedReasons as an undefined
> >> symbol (see below).
> >> I don't understand the reason of this behaviour since I first compile
> >> with all the relevant PETSc and SLEPc headers and link against both
> >> .dylibs.
> >>
> >> Thank you in advance for any pointers,
> >> Marco
> >>
> >> ---
> >> $ nm -u libslepc.3.7.1.dylib | grep KSP
> >
> >
> > What do you get for
> >
> >   nm -u libpetsc.3.7.1.dylib | grep KSPConv
> >
> >  Matt
> >
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener



[petsc-dev] GPU regression tests

2016-05-24 Thread Jose E. Roman
Hi.

We recently noticed that MatMultTranspose_MPIAIJCUSPARSE does not work. We 
tried to debug it but it is difficult because we don't know the internal 
details of Mat.

There might be other cases. It would be convenient to add nightly tests for 
VECCUDA and MATAIJCUSPARSE (in real and complex scalars). If someone enables 
TESTEXAMPLES_CUDA and TESTEXAMPLES_CUDA_COMPLEX (in TEST_RUNS by updating 
configureRegression) we could add a few simple regression tests.

Jose



Re: [petsc-dev] stability issue

2016-05-04 Thread Jose E. Roman

> On 4 May 2016, at 7:35, Vasiliy Kozyrev  wrote:
> 
> Hi
> 
> I have an issue with solution stability (small variations in
> my matrices cause significant variations in the solution).
> It looks like a feature of my eigenvalue problem, because 
> for other problems everything works fine.
> 
> Are there any options in the SLEPc which could be helpful 
> in such situations?
> 
> 
> Vasily Kozyrev

This question is too generic. What is the problem? Are you getting large 
residual errors?

Jose



[petsc-dev] Support for complex scalars in VECCUDA

2016-04-05 Thread Jose E. Roman
Hi Karl,

We would like to add support for complex scalars in VECCUDA and MATAIJCUSPARSE. 
This is almost finished and we plan to create a new pull request for this. Is 
PR #421 ready to merge?

Jose



Re: [petsc-dev] [petsc-checkbuilds] PETSc blame digest (next) 2016-03-11

2016-03-11 Thread Jose E. Roman

> On 11 Mar 2016, at 17:28, Satish Balay  wrote:
> 
> There is one more broken build due to cuda changes.
> 
> 
> http://ftp.mcs.anl.gov/pub/petsc/nightlylogs/archive/2016/03/11/make_next_arch-linux-pkgs-dbg-ftn-interfaces_crank.log
> 
 
> /sandbox/petsc/petsc.clone-3/include/petsc/finclude/ftn-auto/petscvec.h90:740.52:
>Included at 
> /sandbox/petsc/petsc.clone-3/include/petsc/finclude/petscvec.h90:9:
>Included at /sandbox/petsc/petsc.clone-3/src/vec/f90-mod/petscvecmod.F:25:
> 
>  subroutine VecScatterInitializeForGPU(a,b,c,z)
>1
> Error: Symbol 'vecscatterinitializeforgpu' at (1) already has an explicit 
> interface
> 
> 
> I see there are 2 impls of this function each in - resulting in 2
> 'interface' definitions generated - causing this error.
> 
> src/vec/vec/utils/veccusp/vscatcusp.c
> src/vec/vec/utils/veccuda/vscatcuda.c
> 
> Perhaps they shoud be merged into a single one?
> 
> Satish

Yes, we will merge them in src/vec/vec/utils/vscat.c

Jose



Re: [petsc-dev] Not possible to do a VecPlaceArray for veccusp

2016-03-10 Thread Jose E. Roman

> On 10 Mar 2016, at 11:21, Karl Rupp  wrote:
> 
> Great! I'm looking forward to reviewing your pull request. Let me know if you 
> need support with the Mat part.
> 
> Best regards,
> Karli

The pull request:
https://bitbucket.org/petsc/petsc/pull-requests/421/



Re: [petsc-dev] Not possible to do a VecPlaceArray for veccusp

2016-03-10 Thread Jose E. Roman

> On 10 Mar 2016, at 10:10, Karl Rupp  wrote:
> 
> Hi Jose and Alejandro,
> 
> how's your current progress/status? It looks like I'm able to spend some time 
> on this and can get this done by early next week. On the other hand, if 
> you've finished all the relevant parts you required, I will refrain on 
> duplicating the work.
> 
> Best regards,
> Karli

We are done with the Vec part. We will create a pull request today to start 
discussion, and then continue with the part related to Mat.

Jose



Re: [petsc-dev] Not possible to do a VecPlaceArray for veccusp

2016-02-28 Thread Jose E. Roman

> On 28 Feb 2016, at 10:45, Karl Rupp  wrote:
> 
> Hi,
> 
>> I like the idea of having separate VECCUDA and VECVIENNACL, because it is 
>> possible to implement VECCUDA without dependence on a C++ compiler (only the 
>> CUDA compiler).
> 
> I don't understand this part. NVCC also requires a C++ host compiler and is 
> fairly picky about the supported compilers.

You are right. I was thinking of the case when one has a pure C code and wants 
to use a --with-language=C PETSc configuration.

> 
> 
>> If you want, we can prepare a rough initial implementation of VECCUDA in the 
>> next days, and we can later discuss what to keep/discard.
> 
> Any contributions are welcome :-)
> 
> 
>> Karl: regarding the time constraints, our idea is to present something at a 
>> conference this summer, and deadlines are approaching.
> 
> Ok, this is on fairly short notice considering the changes required. I 
> recommend to start with copying the CUSP sources and migrate it over to 
> VECCUDA by replacing any use of cusp::array1d to a raw CUDA handle. 
> Operations from CUSP should be replaced by CUBLAS calls.

Ok. Will start work on this.

Jose

> 
> Best regards,
> Karli
> 



Re: [petsc-dev] Not possible to do a VecPlaceArray for veccusp

2016-02-26 Thread Jose E. Roman

> On 26 Feb 2016, at 18:31, Dominic Meiser  wrote:
> 
> On Fri, Feb 26, 2016 at 02:49:39PM +0100, Karl Rupp wrote:
>> 
 The alternative would be to use raw cuda pointers instead of cusp
 arrays for GPU memory in VecCUSP.  That would be a fairly
 significant undertaking (certainly more than the 2-3 weeks Karli
 is estimating for getting the ViennaCL cuda backend in).
>>> 
>>> Do you mean creating a new class VECCUDA in addition to VECCUSP and 
>>> VECVIENNACL? This could be a solution for us. It would mean maybe 
>>> refactoring MATAIJCUSPARSE to work with these new Vecs?
>> 
>> I prefer to replace VECCUSP with e.g. VECCUDA (and eventually also
>> rename VECCUSPARSE to VECCUDA to have a unified naming for all the
>> things provided natively with the CUDA SDK) in order to reduce
>> external dependencies. CUSP will provide matrices, preconditioners,
>> etc. as before, but is only optional and thus less likely to cause
>> installation troubles. Supporting VECCUSP and VECCUDA next to each
>> other is going to be too much code duplication without any benefit.
> 
> That makes sense.  At the Vec level we should be using a low
> level construct (i.e. cuda raw pointers) because clients can
> always provide raw pointers and they know how to consume them
> (e.g. if they want to use cusp vectors on their end).
> 
>> 
>> Even if we do provide VECCUDA, I still dislike the fact that we
>> would have to maintain essentially the same code twice: One for
>> CUDA, one for OpenCL/ViennaCL. With the ViennaCL bindings providing
>> OpenMP and CUDA support soon, this also duplicates functionality. A
>> possible 'fix' is to just use ViennaCL for CUDA+OpenCL+OpenMP and
>> thus only maintain a single PETSc plugin for all three. However, I'm
>> certainly too biased to be taken seriously here.
> 
> I agree with this in principle.  Perhaps it's time to consolidate
> the cuda/cusp/cusparse/opencl efforts.  Note however that
> MATAIJCUSPARSE provides capabilities that won't be available
> right away with ViennaCL (e.g. multi-GPU block Jacobi and ASM
> preconditioners).

I like the idea of having separate VECCUDA and VECVIENNACL, because it is 
possible to implement VECCUDA without dependence on a C++ compiler (only the 
CUDA compiler).

If you want, we can prepare a rough initial implementation of VECCUDA in the 
next days, and we can later discuss what to keep/discard.

Karl: regarding the time constraints, our idea is to present something at a 
conference this summer, and deadlines are approaching.


> 
> Cheers,
> Dominic
> 
> 
>> 
>>> 
>>> If there is interest we can help in adding this stuff.
>> 
>> What are your time constraints?
>> 
>> Best regards,
>> Karli
>> 
>> 
> 
> -- 
> Dominic Meiser
> Tech-X Corporation - 5621 Arapahoe Avenue - Boulder, CO 80303



Re: [petsc-dev] Not possible to do a VecPlaceArray for veccusp

2016-02-26 Thread Jose E. Roman

> On 25 Feb 2016, at 17:19, Dominic Meiser <dmei...@txcorp.com> wrote:
> 
> On Thu, Feb 25, 2016 at 01:13:01PM +0100, Jose E. Roman wrote:
>> We are trying to do some GPU developments on the SLEPc side, and we would 
>> need a way of placing the array of a VECCUSP vector, providing the GPU 
>> address. Specifically, what we want to do is have a large Vec on GPU and 
>> slice it in several smaller Vecs.
>> 
>> For the GetArray/RestoreArray we have all possibilities:
>> - VecGetArray: gets the pointer to the buffer stored in CPU memory
>> - VecCUSPGetArray*: returns a CUSPARRAY object that contains some info, 
>> including the buffer allocated in GPU memory
>> - VecCUSPGetCUDAArray*: returns a raw pointer of the GPU buffer
>> 
>> The problem comes with PlaceArray equivalents. Using VecPlaceArray we can 
>> provide a new pointer to CPU memory. We wanted to implement the equivalent 
>> thing for GPU, but we found difficulties due to Thrust. If we wanted to 
>> provide a VecCUSPPlaceCUDAArray the problem is that Thrust does not allow 
>> wrapping an existing GPU buffer with a CUSPARRAY object (when creating a 
>> CUSPARRAY it always allocates new memory). On the other hand, 
>> VecCUSPPlaceArray is possible to implement, but the problem is that one 
>> should provide a CUSPARRAY obtained from a VecCUSPGetArray* without 
>> modification (it is not possible to do pointer arithmetic with a CUSPARRAY).
>> 
>> Any thoughts?
>> 
> 
> I think your and Karli's analysis is correct, this is currently
> not supported.  Besides Karli's proposal to use ViennaCL's cuda
> backend a different option might be to use cusp's array views.
> These have a constructor for sub-ranges of other cusp arrays:
> 
> https://github.com/cusplibrary/cusplibrary/blob/master/cusp/array1d.h#L409
> 
> However, enabling cusp array views in something like
> VecCUSPPlaceArray is not immediately possible.  The CUSPARRAY
> type, which is currently hardwired to be
> cusp::array1d<PetscScalar,cusp::device_memory>, would have to
> become a template parameter.  I'm not sure if we want to go down
> that path.

Yes, we do not like this.

> 
> The alternative would be to use raw cuda pointers instead of cusp
> arrays for GPU memory in VecCUSP.  That would be a fairly
> significant undertaking (certainly more than the 2-3 weeks Karli
> is estimating for getting the ViennaCL cuda backend in).

Do you mean creating a new class VECCUDA in addition to VECCUSP and 
VECVIENNACL? This could be a solution for us. It would mean maybe refactoring 
MATAIJCUSPARSE to work with these new Vecs?

If there is interest we can help in adding this stuff.


> 
> Cheers,
> Dominic
> 
> -- 
> Dominic Meiser
> Tech-X Corporation - 5621 Arapahoe Avenue - Boulder, CO 80303



[petsc-dev] Not possible to do a VecPlaceArray for veccusp

2016-02-25 Thread Jose E. Roman
We are trying to do some GPU developments on the SLEPc side, and we would need 
a way of placing the array of a VECCUSP vector, providing the GPU address. 
Specifically, what we want to do is have a large Vec on GPU and slice it in 
several smaller Vecs.

For the GetArray/RestoreArray we have all possibilities:
- VecGetArray: gets the pointer to the buffer stored in CPU memory
- VecCUSPGetArray*: returns a CUSPARRAY object that contains some info, 
including the buffer allocated in GPU memory
- VecCUSPGetCUDAArray*: returns a raw pointer of the GPU buffer

The problem comes with PlaceArray equivalents. Using VecPlaceArray we can 
provide a new pointer to CPU memory. We wanted to implement the equivalent 
thing for GPU, but we found difficulties due to Thrust. If we wanted to provide 
a VecCUSPPlaceCUDAArray the problem is that Thrust does not allow wrapping an 
existing GPU buffer with a CUSPARRAY object (when creating a CUSPARRAY it 
always allocates new memory). On the other hand, VecCUSPPlaceArray is possible 
to implement, but the problem is that one should provide a CUSPARRAY obtained 
from a VecCUSPGetArray* without modification (it is not possible to do pointer 
arithmetic with a CUSPARRAY).

Any thoughts?
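On the CPU side, the slicing pattern being asked for can be done with standard Vec calls; the question is how to do the equivalent with the GPU buffer. A minimal sketch (function name, offset, and length are placeholders; shown for a sequential Vec):

```c
#include <petscvec.h>

/* Sketch: view a contiguous slice [offset, offset+nsub) of a larger
   sequential Vec as a smaller Vec, without copying, via VecPlaceArray.
   The GPU analogue of this aliasing pattern is what the thread is about. */
PetscErrorCode SliceVec(Vec big, PetscInt offset, PetscInt nsub)
{
  Vec            sub;
  PetscScalar    *a;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = VecCreateSeq(PETSC_COMM_SELF, nsub, &sub);CHKERRQ(ierr);
  ierr = VecGetArray(big, &a);CHKERRQ(ierr);
  ierr = VecPlaceArray(sub, a + offset);CHKERRQ(ierr); /* sub now aliases big's storage */
  /* ... operate on sub ... */
  ierr = VecResetArray(sub);CHKERRQ(ierr);             /* restore sub's own (empty) array */
  ierr = VecRestoreArray(big, &a);CHKERRQ(ierr);
  ierr = VecDestroy(&sub);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```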



Re: [petsc-dev] [SLEPc] For users of PETSc master branch, API change

2015-11-10 Thread Jose E. Roman
It seems that you are compiling against the old PETSc. You have to update both 
PETSc and SLEPc.
Jose


> On 10/11/2015, at 10:04, Leoni, Massimiliano 
> <massimiliano.le...@rolls-royce.com> wrote:
> 
> Jose, I see the commit but I still get a compilation error.
> 
> I attach the logs. It looks like there is a definition that was not updated.
> 
> Best,
> 
> Massimiliano
> 
> 
>> -Original Message-
>> From: Jose E. Roman [mailto:jro...@dsic.upv.es]
>> Sent: 09 November 2015 14:48
>> To: petsc-users
>> Cc: Leoni, Massimiliano; petsc-dev
>> Subject: Re: [petsc-dev] [SLEPc] For users of PETSc master branch, API
>> change
>> 
>> The fix is already in SLEPc's branches 'jose/sync-with-petsc' and 'next'.
>> Will merge into 'master' tomorrow.
>> 
>> Jose
>> 
>> 
>>> On 9/11/2015, at 15:44, Satish Balay <ba...@mcs.anl.gov> wrote:
>>> 
>>> you can try using a slightly older 'master' snapshot [until you get
>>> the slepc fix]
>>> 
>>> For eg:
>>> git checkout d916695f21d798ebdf80dc439ef54c5223c9183c
>>> 
>>> And once the slepc fix is available - you can do:
>>> git checkout master
>>> git pull
>>> 
>>> Satish
>>> 
>>> On Mon, 9 Nov 2015, Leoni, Massimiliano wrote:
>>> 
>>>> Ok, sorry!
>>>> It looks like I chose the worst possible day to update :D
>>>> 
>>>> Best,
>>>> 
>>>> Massimiliano
>>>> 
>>>>> -Original Message-
>>>>> From: Jose E. Roman [mailto:jro...@dsic.upv.es]
>>>>> Sent: 09 November 2015 14:26
>>>>> To: Leoni, Massimiliano
>>>>> Cc: Barry Smith; PETSc; petsc-dev
>>>>> Subject: Re: [petsc-dev] [SLEPc] For users of PETSc master branch,
>>>>> API change
>>>>> 
>>>>> Working on it. Be patient. Should be available on master tomorrow.
>>>>> Jose
>>>>> 
>>>>> 
>>>>> 
>>>>>> On 9/11/2015, at 15:23, Leoni, Massimiliano
>>>>>> <Massimiliano.Leoni@Rolls-
>>>>> Royce.com> wrote:
>>>>>> 
>>>>>> Is there a branch in the SLEPc repo that supports this?
>>>>>> 
>>>>>> Massimiliano
>>>>>> 
>>>>>>> -Original Message-
>>>>>>> From: petsc-dev-boun...@mcs.anl.gov [mailto:petsc-dev-
>>>>>>> boun...@mcs.anl.gov] On Behalf Of Barry Smith
>>>>>>> Sent: 09 November 2015 00:21
>>>>>>> To: PETSc; petsc-dev
>>>>>>> Subject: [petsc-dev] For users of PETSc master branch, API change
>>>>>>> 
>>>>>>> 
>>>>>>> For users of the PETSc master branch.
>>>>>>> 
>>>>>>> I have pushed into master some API changes for the
>>>>>>> PetscOptionsGetXXX() and related routines. The first argument is
>>>>>>> now a PetscOptions object, which is optional, if you pass a NULL
>>>>>>> in for the first argument (or a PETSC_NULL_OBJECT in Fortran) you
>>>>>>> will retain the same functionality as you had previously.
>>>>>>> 
>>>>>>> Barry
>>>>>> 
>>>>>> The data contained in, or attached to, this e-mail, may contain
>>>>>> confidential
>>>>> information. If you have received it in error you should notify the
>>>>> sender immediately by reply e-mail, delete the message from your
>>>>> system and contact +44 (0) 3301235850 (Security Operations Centre)
>>>>> if you need assistance. Please do not copy it for any purpose, or
>>>>> disclose its contents to any other person.
>>>>>> 
>>>>>> An e-mail response to this address may be subject to interception
>>>>>> or
>>>>> monitoring for operational reasons or for lawful business practices.
>>>>>> 
>>>>>> (c) 2015 Rolls-Royce plc
>>>>>> 
>>>>>> Registered office: 62 Buckingham Gate, London SW1E 6AT Company
>>>>> number: 1003142. Registered in England.
>>>>>> 
>>>> 
>>>> 
> 
> 



Re: [petsc-dev] [SLEPc] For users of PETSc master branch, API change

2015-11-09 Thread Jose E. Roman
The fix is already in SLEPc's branches 'jose/sync-with-petsc' and 'next'.
Will merge into 'master' tomorrow.

Jose


> On 9/11/2015, at 15:44, Satish Balay <ba...@mcs.anl.gov> wrote:
> 
> you can try using a slightly older 'master' snapshot [until you get
> the slepc fix]
> 
> For eg:
> git checkout d916695f21d798ebdf80dc439ef54c5223c9183c
> 
> And once the slepc fix is available - you can do:
> git checkout master
> git pull
> 
> Satish
> 
> On Mon, 9 Nov 2015, Leoni, Massimiliano wrote:
> 
>> Ok, sorry!
>> It looks like I chose the worst possible day to update :D
>> 
>> Best,
>> 
>> Massimiliano
>> 
>>> -Original Message-
>>> From: Jose E. Roman [mailto:jro...@dsic.upv.es]
>>> Sent: 09 November 2015 14:26
>>> To: Leoni, Massimiliano
>>> Cc: Barry Smith; PETSc; petsc-dev
>>> Subject: Re: [petsc-dev] [SLEPc] For users of PETSc master branch, API
>>> change
>>> 
>>> Working on it. Be patient. Should be available on master tomorrow.
>>> Jose
>>> 
>>> 
>>> 
>>>> On 9/11/2015, at 15:23, Leoni, Massimiliano <Massimiliano.Leoni@Rolls-
>>> Royce.com> wrote:
>>>> 
>>>> Is there a branch in the SLEPc repo that supports this?
>>>> 
>>>> Massimiliano
>>>> 
>>>>> -Original Message-
>>>>> From: petsc-dev-boun...@mcs.anl.gov [mailto:petsc-dev-
>>>>> boun...@mcs.anl.gov] On Behalf Of Barry Smith
>>>>> Sent: 09 November 2015 00:21
>>>>> To: PETSc; petsc-dev
>>>>> Subject: [petsc-dev] For users of PETSc master branch, API change
>>>>> 
>>>>> 
>>>>>  For users of the PETSc master branch.
>>>>> 
>>>>>  I have pushed into master some API changes for the
>>>>> PetscOptionsGetXXX() and related routines. The first argument is now
>>>>> a PetscOptions object, which is optional, if you pass a NULL in for
>>>>> the first argument (or a PETSC_NULL_OBJECT in Fortran) you will
>>>>> retain the same functionality as you had previously.
>>>>> 
>>>>>  Barry
>>>> 
>>>> 
>> 
>> 



Re: [petsc-dev] [SLEPc] For users of PETSc master branch, API change

2015-11-09 Thread Jose E. Roman
Working on it. Be patient. Should be available on master tomorrow.
Jose



> On 9/11/2015, at 15:23, Leoni, Massimiliano wrote:
> 
> Is there a branch in the SLEPc repo that supports this?
> 
> Massimiliano
> 
>> -----Original Message-----
>> From: petsc-dev-boun...@mcs.anl.gov [mailto:petsc-dev-
>> boun...@mcs.anl.gov] On Behalf Of Barry Smith
>> Sent: 09 November 2015 00:21
>> To: PETSc; petsc-dev
>> Subject: [petsc-dev] For users of PETSc master branch, API change
>> 
>> 
>>   For users of the PETSc master branch.
>> 
>>   I have pushed into master some API changes for the PetscOptionsGetXXX()
>> and related routines. The first argument is now a PetscOptions object, which
>> is optional; if you pass a NULL in for the first argument (or a
>> PETSC_NULL_OBJECT in Fortran) you will retain the same functionality as you
>> had previously.
>> 
>>   Barry
> 



Re: [petsc-dev] [GPU - slepc] Hands-on exercise 4 (SVD) not working with GPU and default configurations

2015-08-11 Thread Jose E. Roman

 On 11/8/2015, at 12:17, Leoni, Massimiliano 
 massimiliano.le...@rolls-royce.com wrote:
 
 Jose,
 
 I'm afraid I made myself unclear earlier: when I said the GPU version was 
 slower than the CPU version, I meant single GPU vs single CPU multithreaded 
 [i.e. 12 threads].
 
 The single GPU version is, at the moment, performing slightly better than the 
 serial [1 CPU with one thread] version.
 For example, I ran my code reading a 4x400 matrix I created sampling from 
 a function [a sum of sines with different periods].
 The average execution time on a single CPU is 13.6s, the one on a single GPU 
 is 8.4s; these are similar to the ones I get running the hands-on exercise on 
 SVD out-of-the-box [accordingly to the fact that this portion of my code 
 follows the outline of that example].
 
 I am running on what I think is an optimised build, here are my configure 
 options:
 PETSC_ARCH=linux-gpu-optimised
 --with-clanguage=c++
 --COPTFLAGS=-O3
 --CXXOPTFLAGS=-O3
 --CUDAOPTFLAGS=-O3
 --FOPTFLAGS=-O3
 --with-debugging=no
 --with-log=1
 --with-blas-lapack-dir=/opt/intel/mkl/
 --with-mpi-dir=/path/to/openmpi-1.8.6-gcc
 --with-openmp=1
 --with-hdf5-dir=/path/to/hdf5-1.8.15-patch1/
 --with-cuda=1
 --with-cuda-dir=/path/to/cuda-7.0
 --CUDAC=/path/to/nvcc
 --with-cusp=1
 --with-cusp-dir=/path/to/cusplibrary
 --with-cgns-dir=/path/to/CGNS/
 --with-cmake-dir=/path/to/cmake-3.2.3-Linux-x86_64/
 
 Addressing the other point you raised: I am not scared of low-level 
 programming, but I have quite a tight deadline to present results.
 
 Best,
 
 Massimiliano

Yes, seems ok. For the shell matrix version, you have to take care of memory 
management for the matrix at the GPU, and then use VecCUSPGetArrayRead, 
VecCUSPGetArrayWrite and VecCUSPGetCUDAArray to obtain the GPU pointer 
corresponding to the vectors x,y of the operation y=A*x.

Jose




Re: [petsc-dev] [GPU - slepc] Hands-on exercise 4 (SVD) not working with GPU and default configurations

2015-08-10 Thread Jose E. Roman
Massimiliano,

You should not be getting slower times on the GPU. I tried with a hardware 
similar to what you mention, running SVD on a dense square matrix stored as 
aij, and also with sparse rectangular matrices. In all cases, executions on the 
GPU were roughly 2x faster than on the CPU. Are you running with an optimized 
build? There might be something wrong with your code. I would need to know the 
exact options that you are using. Maybe you can share your code with us, or 
even the matrix.

For the case of a dense matrix, one could create a customized shell matrix that 
stores data on the GPU and uses cuBLAS for the matrix-vector product. We have 
recently done this on a different problem and results were quite good. However, 
it is much more low-level programming compared to just setting AIJCUSP type for 
the matrix.

Jose



 On 10/8/2015, at 15:55, Leoni, Massimiliano 
 massimiliano.le...@rolls-royce.com wrote:
 
  -----Original Message-----
  From: Karl Rupp [mailto:r...@iue.tuwien.ac.at]
  Sent: 10 August 2015 14:13
  To: Leoni, Massimiliano
  Cc: slepc-ma...@upv.es; petsc-dev@mcs.anl.gov
  Subject: Re: [petsc-dev] [GPU - slepc] Hands-on exercise 4 (SVD) not working
  with GPU and default configurations
  
  Maybe you forgot to call SlepcFinalize()?
 Unfortunately it's not it, if I omit SlepcFinalize() an error message shows 
 up at runtime to remind me.
  
  Ok, this is actually a relatively GPU-friendly setup, because CPUs have
  reduced the gap in terms of FLOPs quite a bit (see for example
  http://www.karlrupp.net/2013/06/cpu-gpu-and-mic-hardware-
  characteristics-over-time/  )
 Read, thanks for sharing!
  I'd suggest to convince your supervisor into buying/using a cluster with
  current hardware and enjoy a higher speedup compared to what you could
  get in an ideal setting with a GPU from 2010 anyway ;-)
 This could partly be overcome as I was told I *might*, eventually, have 
 access to a big cluster with many NVIDIA Tesla K20.
  
  (Having said that, I carefully estimate that you can get some
  performance gains for SVD if you deep-dive into the existing SVD
  implementation, carefully redesign it to minimize CPU-GPU
  communication, and use optimized library routines from the BLAS 3
  operations. Currently there is not enough GPU-infrastructure in PETSc to
  achieve this via command line parameters only.)
 Mmm, can you give a rough estimate of the effort involved in this?
  
  
 From: Matthew Knepley [mailto:knep...@gmail.com] 
 Sent: 10 August 2015 14:28
 To: Leoni, Massimiliano
 Cc: Karl Rupp; slepc-ma...@upv.es; petsc-dev@mcs.anl.gov
 Subject: Re: [petsc-dev] [GPU - slepc] Hands-on exercise 4 (SVD) not working 
 with GPU and default configurations
  
 Try calling PetscLogBegin() after PetscInitialize(). We have now put in an 
 error if this is not initialized correctly.
 This didn’t do the trick, unfortunately 
 Do I have to pull from the repo and rebuild?
  
 [In general, can I pull and rebuild without running configure again?]
  
 I agree with Karl that not much speedup can be expected with GPUs. This is 
 the fault of dishonest marketing. None
 of the computations in PETSc are limited by the computation rate, rather 
 they are limited by memory bandwidth. The
 bandwidth is at best 2-3x better, and less for modern CPUs. The dense SVD 
 can be better than this, but you are
 eventually limited by offload times and memory latency. The story of 100x, 
 or even 10x, speedups is just a fraud.
 I remember reading this in one of the petsc reports [the “Preliminary 
 evaluation” one?].
 I’ll see what I can do
  
 Best regards,
 Massimiliano
  
 



Re: [petsc-dev] [GPU - slepc] Hands-on exercise 4 (SVD) not working with GPU and default configurations

2015-08-07 Thread Jose E. Roman
Yes, there seems to be a problem with the default SVD solver (SVDCROSS). I will 
fix it in the master branch in the next few days. Meanwhile, you can run the 
example with -svd_type trlanczos

Thanks for reporting this.
Jose


 On 7/8/2015, at 16:31, Leoni, Massimiliano 
 massimiliano.le...@rolls-royce.com wrote:
 
 Hi everybody!
  
 I kept experimenting with slepc and GPUs, and when I turned to SVD I found 
 out that the hands-on exercise on SVD [#4] doesn’t run properly.
  
 If I run it on CPU it works fine, whereas…
 $ mpirun -np 1 slepcSVD -file 
 $SLEPC_DIR/share/slepc/datafiles/matrices/rdb200.petsc -bv_type vecs 
 -mat_type aijcusp -on_error_abort
  
 Singular value problem stored in file.
  
 Reading REAL matrix from a binary file...
 [0]PETSC ERROR: BVScaleColumn() line 380 in 
 /gpfs/rrcfd/rruk-students/apps/slepc/src/sys/classes/bv/interface/bvops.c 
 Scalar value must be same on all processes, argument # 3
 [0]PETSC ERROR: 
 
 [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
 probably memory access out of range
 [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
 [0]PETSC ERROR: or see 
 http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
 [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to 
 find memory corruption errors
 [0]PETSC ERROR: likely location of problem given in stack below
 [0]PETSC ERROR: -  Stack Frames 
 
 [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
 [0]PETSC ERROR:   INSTEAD the line number of the start of the function
 [0]PETSC ERROR:   is given.
 [0]PETSC ERROR: [0] PetscAbortErrorHandler line 56 
 /gpfs/rrcfd/rruk-students/apps/petsc/src/sys/error/errabort.c
 [0]PETSC ERROR: [0] PetscError line 363 
 /gpfs/rrcfd/rruk-students/apps/petsc/src/sys/error/err.c
 [0]PETSC ERROR: [0] BVScaleColumn line 377 
 /gpfs/rrcfd/rruk-students/apps/slepc/src/sys/classes/bv/interface/bvops.c
 [0]PETSC ERROR: [0] EPSFullLanczos line 357 
 /gpfs/rrcfd/rruk-students/apps/slepc/src/eps/impls/krylov/epskrylov.c
 [0]PETSC ERROR: [0] EPSSolve_KrylovSchur_Symm line 41 
 /gpfs/rrcfd/rruk-students/apps/slepc/src/eps/impls/krylov/krylovschur/ks-symm.c
 [0]PETSC ERROR: [0] EPSSolve line 83 
 /gpfs/rrcfd/rruk-students/apps/slepc/src/eps/interface/epssolve.c
 [0]PETSC ERROR: [0] SVDSolve_Cross line 155 
 /gpfs/rrcfd/rruk-students/apps/slepc/src/svd/impls/cross/cross.c
 [0]PETSC ERROR: [0] SVDSolve line 92 
 /gpfs/rrcfd/rruk-students/apps/slepc/src/svd/interface/svdsolve.c
 [0]PETSC ERROR: User provided function() line 0 in  unknown file (null)
 --
 mpirun noticed that process rank 0 with PID 11264 on node gpu3 exited on 
 signal 11 (Segmentation fault).
 --
  
 I am using the same command line options that work just fine on hands-on 
 exercises 1 and 2, which feature EPS solvers.
  
 Any hint appreciated.
  
 Best regards,
 Massimiliano Leoni
  
 



Re: [petsc-dev] [SLEPc - GPU] Problems running SLEPc on GPUs

2015-07-24 Thread Jose E. Roman

 On 24/7/2015, at 10:09, Leoni, Massimiliano 
 massimiliano.le...@rolls-royce.com wrote:
 
 Hi everybody,
  
 I have recently being trying to run SLEPc on GPUs, but I am experiencing some 
 trouble.
 I think I correctly installed PETSc to run on GPUs, as I can actually see an 
 execution time difference in PETSc programs when run with or without 
 –vec_type cusp –mat_type aijcusp.
 When I try to do the same with SLEPc, I get the following error [running 
 hands-on exercise 2]
 [0]PETSC ERROR: No support for this operation for this object type
 [0]PETSC ERROR: Cannot create a BVSVEC from a non-standard template vector
  
 From the documentation I understood this BVSVEC is a base SLEPc type, so I am 
 not sure how to interpret this [wouldn’t know what to touch at high-level].
 Can anyone please point me to some examples/documentation on this?
  
 Thanks in advance
  
 Massimiliano Leoni
 

As you can see in section 8.3 of SLEPc’s users manual, the command-line 
example adds -bv_type vecs, otherwise you get this error. In the short term we 
plan to add GPU support for the default BV, but for the moment you have to use 
this BV type.

We would be interested in knowing how this works for you, send feedback to 
slepc-maint.

Thanks.
Jose



[petsc-dev] PetscCheckSameType

2015-04-24 Thread Jose E. Roman

Shouldn't PetscCheckSameType compare type_name instead of type?


#define PetscCheckSameType(a,arga,b,argb) \
  if (((PetscObject)a)->type != ((PetscObject)b)->type) SETERRQ2(PETSC_COMM_SELF,PETSC_ERR_ARG_NOTSAMETYPE,"Objects not of same type: Argument # %d and %d",arga,argb);



Re: [petsc-dev] CUDA 7

2015-04-21 Thread Jose E. Roman
Thanks. It works now.
Jose

On 21/04/2015, at 08:51, Steven Dalton wrote:

 Hello Jose,
 
   I pushed a release candidate that should resolve your errors. Please try 
 compiling using branch release/0.5.1 of the CUSP repo. Once I've verified 
 compiling on VS2013 is fully operational I will bag and tag v0.5.1.
 
 Thanks,
   Steve
 
 On Tue, Apr 14, 2015 at 10:32 AM, Steven Dalton sdalt...@gmail.com wrote:
 Hello Jose,
 
   I'm hacking on version to integrate 0.5.0 changes and verify CUDA 7 
 support. I can make it available or send a pull request later this week to 
 start experimenting if anyone is interested.
 
 Steve
 
 On Tue, Apr 14, 2015 at 10:09 AM, Jose E. Roman jro...@dsic.upv.es wrote:
 Has anyone updated to CUDA 7?
 
 With clanguage=c++ now I get lots of warnings with CUSP v0.4.0. These 
 presumably will go away when updating to CUSP v0.5.0, but it turns out that 
 PETSc build is broken with CUSP v0.5.0 (a header file was moved, and some 
 other issues).
 
 Jose
 
 
 



[petsc-dev] CUDA 7

2015-04-14 Thread Jose E. Roman
Has anyone updated to CUDA 7?

With clanguage=c++ now I get lots of warnings with CUSP v0.4.0. These 
presumably will go away when updating to CUSP v0.5.0, but it turns out that 
PETSc build is broken with CUSP v0.5.0 (a header file was moved, and some other 
issues).

Jose



Re: [petsc-dev] -ksp_view

2015-03-26 Thread Jose E. Roman
Try -ksp_view_pre


On 26/03/2015, at 15:36, Matthew Knepley wrote:

 If the solve fails, we never see the output, so I think we need something like
 
   -ksp_preview
 
 which outputs the view before the solve is run. Am I wrong here?
 
   Thanks,
 
  Matt
 
 -- 
 What most experimenters take for granted before they begin their experiments 
 is infinitely more interesting than any results to which their experiments 
 lead.
 -- Norbert Wiener



Re: [petsc-dev] Symmetry acceleration of the Jacobi-Davidson method (in SLEPc)

2015-02-13 Thread Jose E. Roman

On 13/02/2015, at 15:06, Krzysztof Gawarecki wrote:

 Dear All,
 
 I'm calculating eigenvalues and eigenvectors of the matrix which has specific 
 kind of symmetry.
 Due to this symmetry I obtain eigenvalues which are doubly degenerate. 
 So e.g. eigenvalue 'e1' has eigenvectors 'a1' and 'b1'. These eigenvectors 
 are related to each other by the relation a1 = T b1, where T is a matrix 
 (given for my problem).
 So it is enough to calculate only one eigenvector for each eigenvalue (and 
 the second one can be calculated by matvec operation). This situation has 
 been described in http://dl.acm.org/citation.cfm?id=2494747.
 
 How could I take advantage of this in EPSSolve in the Jacobi-Davidson method? 
 Could I add two vectors to the subspace (the second one would be calculated 
 by multiplying the first one by matrix T) in every iteration? Should I modify 
 function dvd_updateV_update_gen in dvd_updatev.c ? 
 
 I would be very grateful for any suggestion.
 
 Krzysztof
 

We do not provide a flexible way for user-provided subspace expansions, so yes 
you can try this route by modifying that function yourself. Send an email to 
slepc-maint if you get stuck.

Jose



[petsc-dev] Force synchronization of CUSP vector

2015-02-02 Thread Jose E. Roman
I would like to force the copy of a VECCUSP from/to the GPU. I need this in 
user code, in particular from within a shell matrix MatMult.

Both VecCUSPCopyToGPU() and VecCUSPCopyToGPU_Public() are declared PETSC_INTERN 
in a private header. The only public function is VecCUSPCopyToGPUSome_Public(). 
In the From variants, VecCUSPCopyFromGPU() is PETSC_EXTERN but declared in a 
private header.

Shouldn't these functions be public?

Jose



Re: [petsc-dev] Sean is going to love this

2014-12-24 Thread Jose E. Roman

On 23/12/2014, at 20:38, Sean Farley wrote:

 4) Better coordination with dependent packages
 
 This item is hard to implement because it's out of the PETSc team's
 control. For example, packages like SLEPc depend on PETSc but don't have
 as good of a build system. SLEPc can't be built with a version of SLEPc
 already installed in the prefix. This is unnecessarily cumbersome for
 end-users.

SLEPc issues a warning in this case, but does not prevent building. I have 
removed the warning, since it is no longer necessary. Please report any 
problems.

Jose



[petsc-dev] Can ViennaCL coexist with Cusp?

2014-12-04 Thread Jose E. Roman
It seems ViennaCL and Cusp are exclusive. From vecimpl.h:

#if defined(PETSC_HAVE_CUSP)
  PetscCUSPFlag  valid_GPU_array;/* indicates where the most 
recently modified vector data is (GPU or CPU) */
  void   *spptr; /* if we're using CUSP, then this is the 
special pointer to the array on the GPU */
#endif
#if defined(PETSC_HAVE_VIENNACL)
  PetscViennaCLFlag  valid_GPU_array;/* indicates where the most 
recently modified vector data is (GPU or CPU) */
  void   *spptr; /* if we're using ViennaCL, then this is the 
special pointer to the array on the GPU */
#endif


So I guess configure should complain when both are enabled at configure time, 
otherwise the build fails.

Jose



Re: [petsc-dev] unwind next branch

2014-09-10 Thread Jose E. Roman


  origin/jose/mumps-bugfix

-- remove  (those fixes have already been included in other commits by Hong.)

Jose



Re: [petsc-dev] PetscSplitReduction

2014-09-10 Thread Jose E. Roman

On 09/07/2014, at 23:39, Jed Brown wrote:

 Satish Balay ba...@mcs.anl.gov writes:
 
 merged to maint now.
 
 Satish, we can't have this non-namespaced stuff in 'maint' (it really
 can break user code).  The struct definition should really be private
 (so if Jose needs to access fields, we need to write interface
 functions).
 
 I'm sorry about my radio silence on this issue.  The travel has been
 more disruptive than usual for having time to write code.


This is still not done. Please tell me how you would like to have it done and I 
will do it.

My request was to make PetscSplitReductionGet, PetscSplitReductionEnd, 
PetscSplitReductionExtend public.

My initial commit was: https://bitbucket.org/petsc/petsc/commits/e8058a0 
(branch jose/split-reduction).

Thanks.
Jose





Re: [petsc-dev] PetscSplitReduction

2014-06-10 Thread Jose E. Roman

On 08/06/2014, at 13:13, Jose E. Roman wrote:

 
 On 08/06/2014, at 12:57, Jed Brown wrote:
 
 Jose E. Roman jro...@dsic.upv.es writes:
 
 Would it be too much asking that PetscSplitReduction be available in a
 public header? (together with the functions PetscSplitReductionGet,
 PetscSplitReductionEnd, PetscSplitReductionExtend).
 
 Sounds reasonable to me.  Do you want to prepare a patch?  Otherwise I
 can do it later today.
 
 
 I would rather leave it to you since you will probably want to add manpages 
 for the three mentioned functions.
 
 Thanks.
 
 

I started with this:
https://bitbucket.org/petsc/petsc/commits/e8058a0

You may want to namespace SRState and add manpages.

Jose



[petsc-dev] PetscSplitReduction

2014-06-08 Thread Jose E. Roman
Would it be too much asking that PetscSplitReduction be available in a public 
header? (together with the functions PetscSplitReductionGet, 
PetscSplitReductionEnd, PetscSplitReductionExtend).

I know this request comes too close to the release date.
Jose



Re: [petsc-dev] PetscSplitReduction

2014-06-08 Thread Jose E. Roman

On 08/06/2014, at 12:57, Jed Brown wrote:

 Jose E. Roman jro...@dsic.upv.es writes:
 
 Would it be too much asking that PetscSplitReduction be available in a
 public header? (together with the functions PetscSplitReductionGet,
 PetscSplitReductionEnd, PetscSplitReductionExtend).
 
 Sounds reasonable to me.  Do you want to prepare a patch?  Otherwise I
 can do it later today.
 

I would rather leave it to you since you will probably want to add manpages for 
the three mentioned functions.

Thanks.




[petsc-dev] Problem with generateetags.py

2014-04-04 Thread Jose E. Roman
I am getting an error when generating the tags, see error below. I have tracked 
the problem down to this offending commit:
https://bitbucket.org/petsc/petsc/commits/326299b7

It started to appear when the TAO users guide was placed under src.
A simple fix is to remove *.tex from generateetags.py
Do you want me to change this?

Jose


$ make alletags
Traceback (most recent call last):
  File "bin/maint/generateetags.py", line 154, in <module>
    main(ctags)
  File "bin/maint/generateetags.py", line 143, in main
    os.path.walk(os.getcwd(),processDir,[etagfile,ctagfile])
  File "/usr/lib/python2.7/posixpath.py", line 246, in walk
    walk(name, func, arg)
  File "/usr/lib/python2.7/posixpath.py", line 246, in walk
    walk(name, func, arg)
  File "/usr/lib/python2.7/posixpath.py", line 246, in walk
    walk(name, func, arg)
  File "/usr/lib/python2.7/posixpath.py", line 238, in walk
    func(arg, top, names)
  File "bin/maint/generateetags.py", line 94, in processDir
    if newls: createTags(etagfile,ctagfile,dirname,newls)
  File "bin/maint/generateetags.py", line 62, in createTags
    raise RuntimeError("Error running ctags "+output)
RuntimeError: Error running ctags ctags: /home/jroman/soft/petsc/CTAGS doesn't look like a tag file; I refuse to overwrite it.
make: *** [alletags] Error 1




Re: [petsc-dev] Problem with generateetags.py

2014-04-04 Thread Jose E. Roman
After git clean it worked.
Many thanks.
Jose


On 04/04/2014, at 17:24, Satish Balay wrote:

 The code was added a few months back - and there is no reported
 issue [by petsc-dev users] or in nightlybuilds since then - so this
 issue must be something else.
 
 Is it reproduceable in a clean repo?
 
 git clean -f -d -x
 
 I think we need the *.tex files in 'etags' file [and currently we
 process the same files for ctags aswell]
 
 Satish
 
 
 On Fri, 4 Apr 2014, Barry Smith wrote:
 
 
  Jose,
 
    I’m sorry I don’t understand this bug report. I don’t see a *.tex in 
  generateetags.py, nor does it crash for me. Is it crashing in a particular 
  directory?
 
   Thanks
 
Barry
 
 
 
 On Apr 4, 2014, at 4:17 AM, Jose E. Roman jro...@dsic.upv.es wrote:
 
 I am getting an error when generating the tags, see error below. I have 
 tracked the problem down to this offending commit:
 https://bitbucket.org/petsc/petsc/commits/326299b7
 
 It started to appear when the TAO users guide was placed under src.
 A simple fix is to remove *.tex from generateetags.py
 Do you want me to change this?
 
 Jose
 
 
 $ make alletags
 Traceback (most recent call last):
   File "bin/maint/generateetags.py", line 154, in <module>
     main(ctags)
   File "bin/maint/generateetags.py", line 143, in main
     os.path.walk(os.getcwd(),processDir,[etagfile,ctagfile])
   File "/usr/lib/python2.7/posixpath.py", line 246, in walk
     walk(name, func, arg)
   File "/usr/lib/python2.7/posixpath.py", line 246, in walk
     walk(name, func, arg)
   File "/usr/lib/python2.7/posixpath.py", line 246, in walk
     walk(name, func, arg)
   File "/usr/lib/python2.7/posixpath.py", line 238, in walk
     func(arg, top, names)
   File "bin/maint/generateetags.py", line 94, in processDir
     if newls: createTags(etagfile,ctagfile,dirname,newls)
   File "bin/maint/generateetags.py", line 62, in createTags
     raise RuntimeError("Error running ctags "+output)
 RuntimeError: Error running ctags ctags: /home/jroman/soft/petsc/CTAGS doesn't look like a tag file; I refuse to overwrite it.
 make: *** [alletags] Error 1
 
 
 
 



Re: [petsc-dev] MatAXPY does not increase state

2014-04-03 Thread Jose E. Roman

On 02/04/2014, at 18:21, Jed Brown wrote:

 Jose E. Roman jro...@dsic.upv.es writes:
 
 Hi.
 
 We are having problems since the MatStructure flag was removed from 
 KSPSetOperators.
 https://bitbucket.org/petsc/petsc/commits/b37f9b8
 
 Tracking our problem leads to blame MatAXPY, which does not increase
 the state of the first Mat argument. The attached patch fixes the
 problem for us. Is this the right thing to do?
 
 Sort of.
 
 diff --git a/src/mat/impls/aij/seq/aij.c b/src/mat/impls/aij/seq/aij.c
 index f05b858..91a51d5 100644
 --- a/src/mat/impls/aij/seq/aij.c
 +++ b/src/mat/impls/aij/seq/aij.c
 @@ -2862,6 +2862,7 @@ PetscErrorCode MatAXPY_SeqAIJ(Mat Y,PetscScalar a,Mat 
 X,MatStructure str)
 ierr = MatHeaderReplace(Y,B);CHKERRQ(ierr);
 ierr = PetscFree(nnz);CHKERRQ(ierr);
   }
 +  ierr = PetscObjectStateIncrease((PetscObject)Y);CHKERRQ(ierr);
   PetscFunctionReturn(0);
 }
 
 I would rather move this up to the place where memory is actually
 accessed.  Vec typically uses VecGetArray to manage access (and state),
 while Mat usually uses direct access.  There may be many places with
 this issue.
 
 diff --git a/src/mat/utils/gcreate.c b/src/mat/utils/gcreate.c
 index 55665bf..eb8fd76 100644
 --- a/src/mat/utils/gcreate.c
 +++ b/src/mat/utils/gcreate.c
 @@ -336,8 +336,9 @@ PetscErrorCode MatHeaderMerge(Mat A,Mat C)
 #define __FUNCT__ MatHeaderReplace
 PETSC_EXTERN PetscErrorCode MatHeaderReplace(Mat A,Mat C)
 {
 -  PetscErrorCode ierr;
 -  PetscInt   refct;
 +  PetscErrorCode   ierr;
 +  PetscInt refct;
 +  PetscObjectState state;
 
   PetscFunctionBegin;
   PetscValidHeaderSpecific(A,MAT_CLASSID,1);
 @@ -356,9 +357,11 @@ PETSC_EXTERN PetscErrorCode MatHeaderReplace(Mat A,Mat 
 C)
 
   /* copy C over to A */
   refct = ((PetscObject)A)->refct;
 +  state = ((PetscObject)A)->state;
   ierr  = PetscMemcpy(A,C,sizeof(struct _p_Mat));CHKERRQ(ierr);
 
   ((PetscObject)A)->refct = refct;
 +  ((PetscObject)A)->state = state;
 
 Surely this should increment state so that someone holding a reference
 to A can tell that it has changed.
 
   ierr = PetscFree(C);CHKERRQ(ierr);
   PetscFunctionReturn(0);

I have created a pull request with the change in MatHeaderReplace, with state+1.
Regarding the other changes in MatAXPY, I don't know where they should go, so I 
left it unchanged. Hence, the problem still exists in the case of same and 
subset nonzero pattern.

Jose




[petsc-dev] MatAXPY does not increase state

2014-04-02 Thread Jose E. Roman
Hi.

We are having problems since the MatStructure flag was removed from 
KSPSetOperators.
https://bitbucket.org/petsc/petsc/commits/b37f9b8

Tracking our problem leads to blame MatAXPY, which does not increase the state 
of the first Mat argument. The attached patch fixes the problem for us. Is this 
the right thing to do?

Jose



patch-mataxpy
Description: Binary data


[petsc-dev] options_left

2014-01-07 Thread Jose E. Roman
Now in 'master' mistyped/unused options are not displayed unless -options_left 
is specified.
Is this intentional? If so, this change should appear in the list of changes.

Another question: I could use some of the additions to branch 
prbrune/removeunwrappedmathfunctions, but this branch has not been merged to 
'master' yet (the corresponding pull request merged it to 'next'). Is it an 
oversight?

Jose



Re: [petsc-dev] Migrate SLEPc to bitbucket

2013-07-09 Thread Jose E. Roman

On 09/07/2013, at 03:01, Jed Brown wrote:

 Jose E. Roman jro...@dsic.upv.es writes:
 
 Now that slepc-3.4 is out, I would like to migrate the SLEPc
 repository to bitbucket. I think it is good to have a user experience
 and workflow as close to PETSc as possible.
 
 Great, would you prefer this to be at
 
  https://bitbucket.org/petsc/slepc
 
 or with your own team account?

I don't know. Maybe it is better to have a slepc team account, then I would add 
you as an administrator. Is this ok?

 
 To avoid sending lots of emails to the list, I would appreciate if
 anyone can volunteer to help me in the process (converting from svn to
 git, creating users and branches, and so on).
 
 I can help with this.  I need names and email addresses for all the
 contributors.

I'll send this info in a separate email.
Thanks,
Jose



[petsc-dev] Migrate SLEPc to bitbucket

2013-07-08 Thread Jose E. Roman
Now that slepc-3.4 is out, I would like to migrate the SLEPc repository to 
bitbucket. I think it is good to have a user experience and workflow as close 
to PETSc as possible.

To avoid sending lots of emails to the list, I would appreciate if anyone can 
volunteer to help me in the process (converting from svn to git, creating users 
and branches, and so on).

Jose


