[petsc-users] Cell type for DMPlexCreateFromCellList

2018-02-22 Thread Danyang Su

Hi All,

What cell types does DMPlexCreateFromCellList support? I tested it with 
triangle, tetrahedron, and prism meshes. Triangle and tetrahedron work, but 
the prism mesh throws an error saying "Cone size 6 not supported for dimension 3".


Could anyone tell me all the supported cell types?

Thanks,

Danyang
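
(For reference, a minimal sketch of the 3.8-era DMPlexCreateFromCellList call;
the single-tetrahedron mesh data below is made up purely for illustration:)

#include <petscdmplex.h>

int main(int argc, char **argv)
{
  DM             dm;
  PetscErrorCode ierr;
  /* one tetrahedron: 4 vertices, 4 corners per cell */
  const int    cells[4]   = {0, 1, 2, 3};
  const double coords[12] = {0.,0.,0., 1.,0.,0., 0.,1.,0., 0.,0.,1.};

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = DMPlexCreateFromCellList(PETSC_COMM_WORLD, 3 /* dim */, 1 /* numCells */,
                                  4 /* numVertices */, 4 /* numCorners */,
                                  PETSC_TRUE /* interpolate */, cells,
                                  3 /* spaceDim */, coords, &dm);CHKERRQ(ierr);
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}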



[petsc-users] Inertia of Hermitian Matrix

2018-02-22 Thread Anthony Ruth
Hello,

I am trying to diagonalize a Hermitian matrix using the Eigenvalue Problem
Solver (EPS) in SLEPc. I run into errors on calls to MatGetInertia() with
complex Hermitian matrices that I did not see with real matrices. The complex
and real versions were built with separate PETSC_ARCH values. I do not know
whether the problem is in the setup of the matrix or, more generally, in
calculating the inertia of a complex matrix.
The matrix is created by:

ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr);
ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);CHKERRQ(ierr);
ierr = MatSetFromOptions(A);CHKERRQ(ierr);
ierr = MatSetUp(A);CHKERRQ(ierr);
ierr = MatSetOption(A,MAT_HERMITIAN,PETSC_TRUE);CHKERRQ(ierr);
ierr = MatGetOwnershipRange(A,&first_row,&last_row);CHKERRQ(ierr); /* '&' arguments restored; variable names assumed */
ierr = MatSetValues(A,m,idxm,n,idxn,data,INSERT_VALUES);CHKERRQ(ierr);
ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

For a Hermitian matrix, all the eigenvalues are real, so I believe it is
possible to calculate an inertia by looking at the signs of the diagonal
entries. If the matrix were complex but not Hermitian, the eigenvalues would
be complex and calculating an inertia would be difficult. Is there some
problem with doing this through sparse iterative methods? Is there a certain
place the matrix needs to be specified as Hermitian besides upon assembly?
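
(For context, a typical SLEPc 3.8 spectrum-slicing setup, the path that
reaches MatGetInertia() in the stack below, looks roughly like the following
sketch; the interval and the MUMPS Cholesky choice are illustrative
assumptions, not taken from the code above:)

#include <slepceps.h>

/* Sketch: spectrum slicing for a Hermitian eigenproblem. The inertia of the
   shifted matrix is obtained from a Cholesky factorization at each shift,
   which is where MatGetInertia() gets called. */
static PetscErrorCode setup_slice(Mat A, EPS *eps)
{
  ST             st;
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = EPSCreate(PETSC_COMM_WORLD, eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(*eps, A, NULL);CHKERRQ(ierr);
  ierr = EPSSetProblemType(*eps, EPS_HEP);CHKERRQ(ierr);      /* Hermitian problem */
  ierr = EPSSetType(*eps, EPSKRYLOVSCHUR);CHKERRQ(ierr);
  ierr = EPSSetWhichEigenpairs(*eps, EPS_ALL);CHKERRQ(ierr);  /* all in an interval */
  ierr = EPSSetInterval(*eps, -1.0, 1.0);CHKERRQ(ierr);       /* assumed interval */
  ierr = EPSGetST(*eps, &st);CHKERRQ(ierr);
  ierr = STSetType(st, STSINVERT);CHKERRQ(ierr);
  ierr = STGetKSP(st, &ksp);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCCHOLESKY);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS);CHKERRQ(ierr); /* 3.8-era name */
  return 0;
}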

Here is the error stack I see when running:


Mat Object: 1 MPI processes
  type: seqaij
row 0: (0, 0.)  (1, 1. + 1. i) (2, 0.)  (3, 0.)  (4, 0.)
row 1: (0, 1. - 1. i) (1, 0.)  (2, 1. + 1. i) (3, 0.)  (4, 0.)
row 2: (0, 0.)  (1, 1. - 1. i) (2, 0.)  (3, 1. + 1. i) (4, 0.)
row 3: (0, 0.)  (1, 0.)  (2, 1. - 1. i) (3, 0.)  (4, 1. + 1. i)
row 4: (0, 0.)  (1, 0.)  (2, 0.)  (3, 1. - 1. i) (4, 0.)
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Mat type mumps
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.8.2, Nov, 09, 2017
[0]PETSC ERROR: Configure options --download-metis --download-mumps
--download-parmetis --download-scalapack --with-scalar-type=complex
[0]PETSC ERROR: #1 MatGetInertia() line 8416 in
/home/anthony/DFTB+SIPs/petsc-3.8.2/src/mat/interface/matrix.c
[0]PETSC ERROR: #2 EPSSliceGetInertia() line 333 in
/home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c
[0]PETSC ERROR: #3 EPSSetUp_KrylovSchur_Slice() line 459 in
/home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c
[0]PETSC ERROR: #4 EPSSetUp_KrylovSchur() line 146 in
/home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/krylovschur.c
[0]PETSC ERROR: #5 EPSSetUp() line 165 in
/home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/interface/epssetup.c
[0]PETSC ERROR: #6 EPSSliceGetEPS() line 298 in
/home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c
[0]PETSC ERROR: #7 EPSSetUp_KrylovSchur_Slice() line 408 in
/home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c
[0]PETSC ERROR: #8 EPSSetUp_KrylovSchur() line 146 in
/home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/krylovschur.c
[0]PETSC ERROR: #9 EPSSetUp() line 165 in
/home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/interface/epssetup.c
[0]PETSC ERROR: #10 SIPSolve() line 195 in
/home/anthony/DFTB+SIPs/dftb-eig15/sips.c
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Invalid argument
[0]PETSC ERROR: Wrong type of object: Parameter # 1
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.8.2, Nov, 09, 2017
[0]PETSC ERROR: Configure options --download-metis --download-mumps
--download-parmetis --download-scalapack --with-scalar-type=complex
[0]PETSC ERROR: #11 EPSGetConverged() line 257 in
/home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/interface/epssolve.c
[0]PETSC ERROR: #12 squareFromEPS() line 131 in
/home/anthony/DFTB+SIPs/dftb-eig15/sips_square.c



regards,
Anthony Ruth
Condensed Matter Theory
University of Notre Dame


Re: [petsc-users] Question on DMPlexCreateSection for Fortran

2018-02-22 Thread Matthew Knepley
On Thu, Feb 22, 2018 at 6:10 PM, Danyang Su  wrote:

> Hi Matt,
>
> Just to let you know that after updating to PETSc 3.8.3, the
> DMPlexCreateSection in my code now works.
>
> One more question: what is the PETSC_NULL_XXX for an IS pointer? As shown
> below, in C code one just passes NULL, but in Fortran, what is the name of
> the null object for pBcCompIS and pBcPointIS?
>
Fortran does not "use pointers", so we need to pass a real object that we
then convert to a NULL pointer for C.
PETSC_NULL_IS is a real IS object in Fortran that then gets converted to
NULL before calling the C function.

  Thanks,

Matt

> call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim,   &
>  numFields,pNumComp,pNumDof,   &
>  numBC,pBcField,   &
>  pBcCompIS,pBcPointIS, &
>  PETSC_NULL_IS,section,ierr)
> CHKERRQ(ierr)
>
> Thanks,
>
> Danyang
>
>
> On 18-02-21 09:22 AM, Danyang Su wrote:
>
> Hi Matt,
>
> To test the Segmentation Violation problem in my code, I modified the
> example ex1f90.F to reproduce the problem I have in my own code.
>
> If I use DMPlexCreateBoxMesh to generate the mesh, the code works fine.
> However, if I use DMPlexCreateGmshFromFile with the same mesh exported
> from DMPlexCreateBoxMesh, it gives a Segmentation Violation error.
>
> Did I miss something in the input mesh file? My first guess is the label
> "marker" used in the code, but I couldn't find any place to set this label.
>
> Would you please let me know how to solve this problem? My code is done in
> a similar way as ex1f90: it reads the mesh from an external file or creates it
> from a cell list, distributes the mesh (these already work), and then creates
> sections and sets ndof on the nodes.
>
> Thanks,
>
> Danyang
>
> On 18-02-20 10:07 AM, Danyang Su wrote:
>
> On 18-02-20 09:52 AM, Matthew Knepley wrote:
>
> On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su  wrote:
>
>> Hi All,
>>
>> I tried to compile the DMPlexCreateSection code but got error information
>> as shown below.
>>
>> Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type
>>
>> I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS; then the code
>> compiles but runs into a Segmentation Violation error in
>> DMPlexCreateSection.
>>
> From the webpage
>
>   http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html
>
> The F90 version is DMPlexCreateSectionF90. Doing this with F77 arrays
> would have been too painful.
>
> Hi Matt,
>
> Sorry, I still cannot compile the code if I use DMPlexCreateSectionF90
> instead of DMPlexCreateSection. Would you please tell me in more detail?
>
> undefined reference to `dmplexcreatesectionf90_'
>
> then I #include <petsc/finclude/petscdmplex.h90>, but this throws more
> errors during compilation.
>
>
> Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:
> Included at ../../solver/solver_ddmethod.F90:62:
>
>   PETSCSECTION_HIDE section
>   1
> Error: Unclassifiable statement at (1)
> /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:167.10:
> Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:
> Included at ../../solver/solver_ddmethod.F90:62:
>
>   PETSCSECTION_HIDE section
>   1
> Error: Unclassifiable statement at (1)
> /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:179.10:
> Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:
> Included at ../../solver/solver_ddmethod.F90:62:
>
>
>   Thanks,
>
>  Matt
>
>> dmda_flow%da is a distributed dm object that works fine.
>>
>> The Fortran example I follow is http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90.
>>
>> What parameters should I use if passing null to bcField, bcComps,
>> bcPoints and perm.
>>
>> PetscErrorCode DMPlexCreateSection(DM dm, PetscInt dim, PetscInt numFields,
>>   const PetscInt numComp[], const PetscInt numDof[], PetscInt numBC,
>>   const PetscInt bcField[], const IS bcComps[], const IS bcPoints[],
>>   IS perm, PetscSection *section)

Re: [petsc-users] Question on DMPlexCreateSection for Fortran

2018-02-22 Thread Danyang Su

Hi Matt,

Just to let you know that after updating to PETSc 3.8.3, the 
DMPlexCreateSection in my code now works.

One more question: what is the PETSC_NULL_XXX for an IS pointer? As shown 
below, in C code one just passes NULL, but in Fortran, what is the name of 
the null object for pBcCompIS and pBcPointIS?


    call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim,   &
numFields,pNumComp,pNumDof,   &
numBC,pBcField,   &
pBcCompIS,pBcPointIS, &
 PETSC_NULL_IS,section,ierr)
    CHKERRQ(ierr)

Thanks,

Danyang

On 18-02-21 09:22 AM, Danyang Su wrote:


Hi Matt,

To test the Segmentation Violation problem in my code, I modified the 
example ex1f90.F to reproduce the problem I have in my own code.


If I use DMPlexCreateBoxMesh to generate the mesh, the code works fine. 
However, if I use DMPlexCreateGmshFromFile with the same mesh 
exported from DMPlexCreateBoxMesh, it gives a Segmentation Violation 
error.


Did I miss something in the input mesh file? My first guess is the 
label "marker" used in the code, but I couldn't find any place to set 
this label.


Would you please let me know how to solve this problem? My code is 
done in a similar way as ex1f90: it reads the mesh from an external file or 
creates it from a cell list, distributes the mesh (these already work), and 
then creates sections and sets ndof on the nodes.


Thanks,

Danyang
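
(For reference, a hedged sketch of the read-and-distribute sequence described
above, using the 3.7/3.8 API; the filename "mesh.msh" is an assumption:)

#include <petscdmplex.h>

/* Sketch: read a Gmsh mesh and distribute it across processes. */
static PetscErrorCode load_mesh(DM *dm)
{
  DM             dmDist = NULL;
  PetscErrorCode ierr;

  ierr = DMPlexCreateGmshFromFile(PETSC_COMM_WORLD, "mesh.msh", PETSC_TRUE, dm);CHKERRQ(ierr);
  ierr = DMPlexDistribute(*dm, 0, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) { ierr = DMDestroy(dm);CHKERRQ(ierr); *dm = dmDist; }
  return 0;
}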


On 18-02-20 10:07 AM, Danyang Su wrote:

On 18-02-20 09:52 AM, Matthew Knepley wrote:
On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su wrote:


Hi All,

I tried to compile the DMPlexCreateSection code but got error
information as shown below.

Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type

I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS; then
the code compiles but runs into a Segmentation Violation
error in DMPlexCreateSection.

From the webpage

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html 



The F90 version is DMPlexCreateSectionF90. Doing this with F77 
arrays would have been too painful.

Hi Matt,

Sorry, I still cannot compile the code if I use DMPlexCreateSectionF90 
instead of DMPlexCreateSection. Would you please tell me in more 
detail?


undefined reference to `dmplexcreatesectionf90_'

then I #include <petsc/finclude/petscdmplex.h90>, but this throws 
more errors during compilation.



    Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:
    Included at ../../solver/solver_ddmethod.F90:62:

  PETSCSECTION_HIDE section
  1
Error: Unclassifiable statement at (1)
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:167.10:
    Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:
    Included at ../../solver/solver_ddmethod.F90:62:

  PETSCSECTION_HIDE section
  1
Error: Unclassifiable statement at (1)
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:179.10:
    Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:
    Included at ../../solver/solver_ddmethod.F90:62:


  Thanks,

     Matt

dmda_flow%da is a distributed dm object that works fine.

The Fortran example I follow is
http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90.


What parameters should I use if passing null to bcField,
bcComps, bcPoints and perm.

PetscErrorCode DMPlexCreateSection(DM dm, PetscInt dim, PetscInt numFields,
  const PetscInt numComp[], const PetscInt numDof[], PetscInt numBC,
  const PetscInt bcField[], const IS bcComps[], const IS bcPoints[],
  IS perm, PetscSection *section)
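
(In C these optional arguments may simply be NULL; a hedged sketch of a
no-boundary-condition call, assuming dm, dim, numFields, numComp, and numDof
are already set up. In Fortran, PETSC_NULL_IS plays the role of these NULLs:)

  PetscSection section;
  ierr = DMPlexCreateSection(dm, dim, numFields, numComp, numDof,
                             0, NULL,    /* numBC, bcField */
                             NULL, NULL, /* bcComps, bcPoints */
                             NULL,       /* perm */
                             &section);CHKERRQ(ierr);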

Re: [petsc-users] question about MatInvertBlockDiagonal_SeqBAIJ

2018-02-22 Thread Smith, Barry F.

  Removed in the branch barry/remove-mdiag-baij

  Barry


> On Feb 22, 2018, at 6:55 AM, Xiangdong  wrote:
> 
> Hello everyone,
> 
> I am curious about the purpose of mdiag in MatInvertBlockDiagonal_SeqBAIJ
> 
> http://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/baij/seq/baij.c.html
> 
> It seems that the inverse of the diagonal block is stored in a->idiag, and 
> an extra copy of the diagonal block itself is stored in mdiag, i.e. at 
> a->idiag+bs2*mbs. What is the purpose of storing mdiag as an extra copy 
> of the diagonal block? When will this mdiag be used?
> 
> Thank you.
> 
> Best,
> Xiangdong
> 
> 
> if (!a->idiag) {
>  38: PetscMalloc1(2*bs2*mbs,&a->idiag);
>  39: PetscLogObjectMemory((PetscObject)A,2*bs2*mbs*sizeof(PetscScalar));
>  40:   }
>  41:   diag  = a->idiag;
>  42:   mdiag = a->idiag+bs2*mbs;
> 
> 
> 138: for (i=0; i<mbs; i++) {
> 139:   odiag = v + bs2*diag_offset[i];
> 140:   PetscMemcpy(diag,odiag,bs2*sizeof(PetscScalar));
> 141:   PetscMemcpy(mdiag,odiag,bs2*sizeof(PetscScalar));
> 142:   PetscKernel_A_gets_inverse_A(bs,diag,v_pivots,v_work,allowzeropivot,&zeropivotdetected);
> 143:   if (zeropivotdetected) A->factorerrortype = MAT_FACTOR_NUMERIC_ZEROPIVOT;
> 144:   diag  += bs2;
> 145:   mdiag += bs2;
> 



Re: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3

2018-02-22 Thread Smith, Barry F.

  First run under valgrind to look for memory issues.

   Second, I would change one thing at a time: use the Intel 2017 compiler with 
PETSc 3.8.3, so the only change is the source updates needed for 3.8.3 and it 
does not also include a compiler change.

   I am not sure what numbers you are printing below, but changing 
optimization levels can and will change numerical values slightly, so a change in 
numerical values may not by itself indicate anything is wrong (or it may, 
depending on how different the numerical values are).

   Barry


> On Feb 21, 2018, at 10:23 PM, TAY wee-beng  wrote:
> 
> 
> On 21/2/2018 11:44 AM, Smith, Barry F. wrote:
>>   Did you follow the directions in the changes file for 3.8?
>> 
>> Replace calls to DMDACreateXd() with DMDACreateXd(), 
>> [DMSetFromOptions()] DMSetUp()
>> DMDACreateXd() can no longer take negative values for dimensions; 
>> instead pass positive values and call DMSetFromOptions() immediately 
>> after
>> 
>> I suspect you are not calling DMSetUp() and this is causing the problem.
>> 
>>   Barry
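
(For reference, a hedged sketch of the 3.8-style creation sequence in C; the
sizes, dof, and stencil width below are placeholders, not values from this
thread:)

#include <petscdmda.h>

/* Sketch: in PETSc 3.8, DMDACreate3d() no longer sets the DM up itself;
   DMSetFromOptions()/DMSetUp() must follow before the DM is used. */
static PetscErrorCode create_da(DM *da)
{
  PetscErrorCode ierr;

  ierr = DMDACreate3d(PETSC_COMM_WORLD,
                      DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR, 64, 64, 64,            /* global sizes */
                      PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, /* process grid */
                      1 /* dof */, 1 /* stencil width */,
                      NULL, NULL, NULL, da);CHKERRQ(ierr);
  ierr = DMSetFromOptions(*da);CHKERRQ(ierr);
  ierr = DMSetUp(*da);CHKERRQ(ierr); /* omitting this is what triggers null-object errors */
  return 0;
}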
> Oops, sorry, indeed I didn't change that part. Got it compiled now.
> 
> However, I have got a new problem. Previously, I was using Intel 2016 with 
> PETSc 3.7.6. During compilation, I used -O3 for all modules except one, which 
> would give an error (due to DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90). 
> Hence, I needed to use -O1 for that module.
> 
> Now, I'm using Intel 2018 with PETSc 3.8.3 and I got the error:
> 
> M Diverged but why?, time =2
>  reason =   -9
> 
> I tried to change all *.F90 from using -O3 to -O1, and although no 
> divergence error is printed, my values are different:
> 
> 1  0.0160  0.46655767  0.46310378  1.42427154 
> -0.81598016E+02 -0.11854431E-01  0.42046197E+06
>2  0.00956350  0.67395693  0.64698638 1.44166606 
> -0.12828928E+03  0.12179394E-01  0.41961824E+06
> 
> vs
> 
> 1  0.0160  0.49096543  0.46259333  1.41828130 
> -0.81561221E+02 -0.16146574E-01  0.42046335E+06
>2  0.00956310  0.68342495  0.63682485 1.44353571 
> -0.12813998E+03  0.24226242E+00  0.41962121E+06
> 
> The latter values were obtained using the debug build, and they compare 
> correctly with results from another cluster, which uses GNU.
> 
> What's going on and how should I troubleshoot?
> Thanks
>> 
>> 
>>> On Feb 20, 2018, at 7:35 PM, TAY wee-beng  wrote:
>>> 
>>> 
>>> On 21/2/2018 10:47 AM, Smith, Barry F. wrote:
>>>>    Try setting
>>>>
>>>>    u_global = tVec(1)
>>>>
>>>>    immediately before the call to DMCreateGlobalVector()
>>>>
>>>>
>>> Hi,
>>> 
>>> I added the line in but still got the same error below. Btw, my code is 
>>> organised as:
>>> 
>>> module global_data
>>> 
>>> #include "petsc/finclude/petsc.h"
>>> use petsc
>>> use kdtree2_module
>>> implicit none
>>> save
>>> ...
>>> Vec u_local,u_global ...
>>> ...
>>> contains
>>> 
>>> subroutine allo_var
>>> ...
>>> u_global = tVec(1)
>>> call DMCreateGlobalVector(da_u,u_global,ierr)
>>> ...
>>> 
>>> 
>>> 
>>> 
>>> [0]PETSC ERROR: - Error Message 
>>> 
>>> --
>>> [0]PETSC ERROR: Null argument, when expecting valid pointer
>>> [0]PETSC ERROR: Null Object: Parameter # 2
>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
>>> trou
>>> ble shooting.
>>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017
>>> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a 
>>> petsc-3.8.
>>> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 11:18:20 2018
>>> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" 
>>> --with-fc="win32fe ifo
>>> rt" --with-cxx="win32fe icl" --download-fblaslapack 
>>> --with-mpi-include="[/cygdri
>>> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program 
>>> Files (x
>>> 86)/Microsoft SDKs/MPI/Include/x64]" 
>>> --with-mpi-mpiexec="/cygdrive/c/Program Fil
>>> es/Microsoft MPI/Bin/mpiexec.exe" --with-debugging=1 
>>> --with-file-create-pause=1
>>> --prefix=/cygdrive/c/wtay/Lib/petsc-3.8.3_win64_msmpi_vs2008 
>>> --with-mpi-lib="[/c
>>> ygdrive/c/Program Files (x86)/Microsoft 
>>> SDKs/MPI/Lib/x64/msmpifec.lib,/cygdrive/
>>> c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib]" 
>>> --with-shared-libra
>>> ries=0
>>> [0]PETSC ERROR: #1 VecSetLocalToGlobalMapping() line 78 in 
>>> C:\Source\PETSC-~2.3\
>>> src\vec\vec\INTERF~1\vector.c
>>> [0]PETSC ERROR: #2 DMCreateGlobalVector_DA() line 41 in 
>>> C:\Source\PETSC-~2.3\src
>>> \dm\impls\da\dadist.c
>>> [0]PETSC ERROR: #3 DMCreateGlobalVector() line 844 in 
>>> C:\Source\PETSC-~2.3\src\d
>>> m\INTERF~1\dm.c
>>> 
>>> Thanks.
> On Feb 20, 2018, at 6:40 PM, TAY wee-beng  wrote:
> 
> Hi,
> 
> Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to 
> debug step by step. I got into problem when calling:

Re: [petsc-users] question about MatInvertBlockDiagonal_SeqBAIJ

2018-02-22 Thread Smith, Barry F.

   It looks like this was copied from the AIJ format but is never used for 
BAIJ. Could probably be removed.


   Barry


> On Feb 22, 2018, at 6:55 AM, Xiangdong  wrote:
> 
> Hello everyone,
> 
> I am curious about the purpose of mdiag in MatInvertBlockDiagonal_SeqBAIJ
> 
> http://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/baij/seq/baij.c.html
> 
> It seems that the inverse of the diagonal block is stored in a->idiag, and 
> an extra copy of the diagonal block itself is stored in mdiag, i.e. at 
> a->idiag+bs2*mbs. What is the purpose of storing mdiag as an extra copy 
> of the diagonal block? When will this mdiag be used?
> 
> Thank you.
> 
> Best,
> Xiangdong
> 
> 
> if (!a->idiag) {
>  38: PetscMalloc1(2*bs2*mbs,&a->idiag);
>  39: PetscLogObjectMemory((PetscObject)A,2*bs2*mbs*sizeof(PetscScalar));
>  40:   }
>  41:   diag  = a->idiag;
>  42:   mdiag = a->idiag+bs2*mbs;
> 
> 
> 138: for (i=0; i<mbs; i++) {
> 139:   odiag = v + bs2*diag_offset[i];
> 140:   PetscMemcpy(diag,odiag,bs2*sizeof(PetscScalar));
> 141:   PetscMemcpy(mdiag,odiag,bs2*sizeof(PetscScalar));
> 142:   PetscKernel_A_gets_inverse_A(bs,diag,v_pivots,v_work,allowzeropivot,&zeropivotdetected);
> 143:   if (zeropivotdetected) A->factorerrortype = MAT_FACTOR_NUMERIC_ZEROPIVOT;
> 144:   diag  += bs2;
> 145:   mdiag += bs2;
> 



Re: [petsc-users] Request for being added into mailing list

2018-02-22 Thread Satish Balay
Added now.

Note: Normally one can subscribe with instructions from

http://www.mcs.anl.gov/petsc/miscellaneous/mailing-lists.html

Satish

On Thu, 22 Feb 2018, Lin Tu wrote:

> Dear Petsc team,
> 
> 
> I'm new; please add me to the mailing list. Thank you very much in advance!
> 
> 
> Sincerely,
> 
> Lin Tu
> 
> 
> 
> Lin (Ruby) Tu
> PhD student
> Institute for Astrophysics
> University of Vienna
> Türkenschanzstraße 17
> 1180 Wien
> 
> 



[petsc-users] question about MatInvertBlockDiagonal_SeqBAIJ

2018-02-22 Thread Xiangdong
Hello everyone,

I am curious about the purpose of mdiag in MatInvertBlockDiagonal_SeqBAIJ

http://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/baij/seq/baij.c.html

It seems that the inverse of the diagonal block is stored in a->idiag, and
an extra copy of the diagonal block itself is stored in mdiag, i.e. at
a->idiag+bs2*mbs. What is the purpose of storing mdiag as an extra
copy of the diagonal block? When will this mdiag be used?

Thank you.

Best,
Xiangdong


if (!a->idiag) {
 38: PetscMalloc1(2*bs2*mbs,&a->idiag);
 39: PetscLogObjectMemory((PetscObject)A,2*bs2*mbs*sizeof(PetscScalar));
 40:   }
 41:   diag  = a->idiag;
 42:   mdiag = a->idiag+bs2*mbs;


138: for (i=0; i<mbs; i++) {
139:   odiag = v + bs2*diag_offset[i];
140:   PetscMemcpy(diag,odiag,bs2*sizeof(PetscScalar));
141:   PetscMemcpy(mdiag,odiag,bs2*sizeof(PetscScalar));
142:   PetscKernel_A_gets_inverse_A(bs,diag,v_pivots,v_work,allowzeropivot,&zeropivotdetected);
143:   if (zeropivotdetected) A->factorerrortype = MAT_FACTOR_NUMERIC_ZEROPIVOT;
144:   diag  += bs2;
145:   mdiag += bs2;
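
(For context, the public entry point for this code is MatInvertBlockDiagonal();
a hedged sketch of calling it, where the printing is purely illustrative:)

#include <petscmat.h>

/* Sketch: retrieve the inverted diagonal blocks computed into a->idiag.
   The returned array holds one bs*bs block per block row; it is owned by
   the matrix and must not be freed by the caller. */
static PetscErrorCode show_inverted_blocks(Mat A)
{
  const PetscScalar *idiag;
  PetscInt           bs;
  PetscErrorCode     ierr;

  ierr = MatGetBlockSize(A, &bs);CHKERRQ(ierr);
  ierr = MatInvertBlockDiagonal(A, &idiag);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_SELF, "(0,0) entry of first inverted %Dx%D block: %g\n",
                     bs, bs, (double)PetscRealPart(idiag[0]));CHKERRQ(ierr);
  return 0;
}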


Re: [petsc-users] Compiling with PETSc 64-bit indices

2018-02-22 Thread Matthew Knepley
On Thu, Feb 22, 2018 at 1:24 AM, TAY wee-beng  wrote:

>
> On 21/2/2018 9:12 AM, Matthew Knepley wrote:
>
> On Tue, Feb 20, 2018 at 8:08 PM, TAY wee-beng  wrote:
>
>>
>> On 21/2/2018 9:00 AM, Matthew Knepley wrote:
>>
>> On Tue, Feb 20, 2018 at 7:54 PM, TAY wee-beng  wrote:
>>
>>> Hi,
>>>
>>> When I run my CFD code with a grid size of 1119x1119x499 (total grid
>>> size = 624828339), I got an error saying I need to compile PETSc with
>>> 64-bit indices.
>>>
>>> So I tried to compile PETSc again and then compile my CFD code with the
>>> newly compiled PETSc. However, now I got segmentation error:
>>>
>>> rm: cannot remove `log': No such file or directory
>>> [409]PETSC ERROR: --
>>> --
>>> [409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR:
>>> 
>>> [410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>>> probably memory access out of range
>>> [410]PETSC ERROR: Try option -start_in_debugger or
>>> -on_error_attach_debugger
>>> [410]PETSC ERROR: [536]PETSC ERROR: --
>>> --
>>> [536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>>> probably memory access out of range
>>> [536]PETSC ERROR: Try option -start_in_debugger or
>>> -on_error_attach_debugger
>>> [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d
>>> ocumentation/faq.html#valgrind
>>> [536]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac
>>> OS X to find memory corruption errors
>>> [536]PETSC ERROR: likely location of problem given in stack below
>>> [536]PETSC ERROR: -  Stack Frames
>>> 
>>> [536]PETSC ERROR: Note: The EXACT line numbers in the stack are not
>>> available,
>>> [536]PETSC ERROR:   INSTEAD the line number of the start of the
>>> function
>>> [536]PETSC ERROR:   is given.
>>> [536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line 581
>>> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c
>>> [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d
>>> ocumentation/faq.html#valgrind
>>> [410]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac
>>> OS X to find memory corruption errors
>>> [410]PETSC ERROR: likely location of problem given in stack below
>>> [410]PETSC ERROR: -  Stack Frames
>>> 
>>> [410]PETSC ERROR: Note: The EXACT line numbers in the stack are not
>>> available,
>>> [897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613
>>> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c
>>> [536]PETSC ERROR: [536] DMDACreate3d line 1434
>>> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c
>>> [536]PETSC ERROR: - Error Message
>>> --
>>>
>>> The CFD code worked previously but increasing the problem size results
>>> in segmentation error. It seems to be related to DMDACreate3d and
>>> DMDASetOwnershipRanges. Any idea where the problem lies?
>>>
>>> Besides, I want to know when and why do I have to use PETSc with 64-bit
>>> indices?
>>>
>>
>> 1) PETSc indices are signed, so a 32-bit PetscInt can hold numbers only up
>> to 2^31 - 1, about 2.1e9. If you have a 3D velocity, pressure, and energy,
>> you already have about 3e9 unknowns, before you even start to count nonzero
>> entries in the matrix. 64-bit integers allow you to handle these big sizes.
>>
>>
>>> Also, can I use the 64-bit indices version with smaller sized problems?
>>>
>>
>> 2) Yes
>>
>>
>>> And is there a speed difference between using the 32-bit and 64-bit
>>> indices ver?
>>
>>
>> 3) I have seen no evidence of this
>>
>> 4) My guess is that you have defined regular integers in your code and
>> passed them to PETSc, rather than using PetscInt as the type.
>>
>> Oh that seems probable. So I am still using integer(4) when it should be
>> integer(8) for some values, is that so? If I use PetscInt, is it the same
>> as integer(8)? Or does it depend on the actual number?
>>
>
> PetscInt will be integer(4) if you configure with 32-bit ints, and
> integer(8) if you configure with 64-bit ints. If you use it consistently,
> you can avoid problems
> with matching the PETSc API.
>
> I wonder, if I replace all my integers with PetscInt, will there be a large
>> increase in memory usage, because all integer(4) now become integer(8)?
>>
>
> Only if you have large integer storage. Most codes do not.
>
> Hi,
>
> What do you mean by "large integer storage"?
>

You have some structure that stores 1B integers. Most codes do not.
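
(A small hedged illustration of the point, reusing the grid size from earlier
in the thread; with --with-64-bit-indices the product below fits in a
PetscInt, while with the default 32-bit indices it would overflow:)

#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  PetscInt       nx = 1119, ny = 1119, nz = 499, fields = 5, total;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  /* about 3.1e9 unknowns: beyond the 2^31 - 1 range of 32-bit indices */
  total = nx*ny*nz*fields;
  ierr  = PetscPrintf(PETSC_COMM_WORLD, "total unknowns: %D\n", total);CHKERRQ(ierr);
  ierr  = PetscFinalize();
  return ierr;
}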


> Btw, I got the following error when I ran a simple small test case with my
> CFD code:
>

You requested 7M TB of memory. This looks like an uninitialized integer.

  Thanks,

 Matt


> [0]PETSC ERROR: