Re: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3

2018-02-20 Thread Smith, Barry F.

  Did you follow the directions in the changes file for 3.8?

Replace calls to DMDACreateXd() with DMDACreateXd(), [DMSetFromOptions()] DMSetUp()
DMDACreateXd() can no longer take negative values for dimensions; instead pass positive values and call DMSetFromOptions() immediately after

I suspect you are not calling DMSetUp() and this is causing the problem.
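
A minimal Fortran sketch of the 3.8-style creation sequence (the grid sizes, boundary/stencil settings, dof and stencil width below are illustrative; da_u, u_global and ierr follow the names used in this thread):

  call DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, &
                    DMDA_STENCIL_STAR, size_x, size_y, size_z,                               &
                    PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, 1, 1,                          &
                    PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da_u, ierr)
  call DMSetFromOptions(da_u, ierr)  ! pick up any -da_* options
  call DMSetUp(da_u, ierr)           ! without this the DMDA is not fully built, and
                                     ! DMCreateGlobalVector fails with a null-object error
  call DMCreateGlobalVector(da_u, u_global, ierr)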

  Barry


> On Feb 20, 2018, at 7:35 PM, TAY wee-beng  wrote:
> 
> 
> On 21/2/2018 10:47 AM, Smith, Barry F. wrote:
>>   Try setting
>> 
>>   u_global = tVec(1)
>> 
>>   immediately before the call to DMCreateGlobalVector()
>> 
>> 
> Hi,
> 
> I added the line in but still got the same error below. Btw, my code is 
> organised as:
> 
> module global_data
> 
> #include "petsc/finclude/petsc.h"
> use petsc
> use kdtree2_module
> implicit none
> save
> ...
> Vec u_local,u_global ...
> ...
> contains
> 
> subroutine allo_var
> ...
> u_global = tVec(1)
> call DMCreateGlobalVector(da_u,u_global,ierr)
> ...
> 
> 
> 
> 
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [0]PETSC ERROR: Null Object: Parameter # 2
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017
> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8.3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 11:18:20 2018
> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifort" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Include/x64]" --with-mpi-mpiexec="/cygdrive/c/Program Files/Microsoft MPI/Bin/mpiexec.exe" --with-debugging=1 --with-file-create-pause=1 --prefix=/cygdrive/c/wtay/Lib/petsc-3.8.3_win64_msmpi_vs2008 --with-mpi-lib="[/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib,/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib]" --with-shared-libraries=0
> [0]PETSC ERROR: #1 VecSetLocalToGlobalMapping() line 78 in C:\Source\PETSC-~2.3\src\vec\vec\INTERF~1\vector.c
> [0]PETSC ERROR: #2 DMCreateGlobalVector_DA() line 41 in C:\Source\PETSC-~2.3\src\dm\impls\da\dadist.c
> [0]PETSC ERROR: #3 DMCreateGlobalVector() line 844 in C:\Source\PETSC-~2.3\src\dm\INTERF~1\dm.c
> 
> Thanks.
>>> On Feb 20, 2018, at 6:40 PM, TAY wee-beng  wrote:
>>> 
>>> Hi,
>>> 
>>> Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to 
>>> debug step by step. I ran into a problem when calling:
>>> 
>>> call DMCreateGlobalVector(da_u,u_global,ierr)
>>> 
>>> The error is:
>>> 
>>> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
>>> [0]PETSC ERROR: Null argument, when expecting valid pointer
>>> [0]PETSC ERROR: Null Object: Parameter # 2
>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
>>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017
>>> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8.3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018
>>> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifort" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x
>>> 
>>> But all I changed is from:
>>> 
>>> module global_data
>>> #include "petsc/finclude/petsc.h"
>>> use petsc
>>> use kdtree2_module
>>> implicit none
>>> save
>>> !grid variables
>>> 
>>> integer :: size_x,s
>>> 
>>> ...
>>> 
>>> to
>>> 
>>> module global_data
>>> use kdtree2_module
>>> implicit none
>>> save
>>> #include "petsc/finclude/petsc.h90"
>>> !grid variables
>>> integer :: size_x,s...
>>> 
>>> ...
>>> 
>>> da_u, u_global were declared thru:
>>> 
>>> DM  da_u,da_v,...
>>> DM  da_cu_types ...
>>> Vec u_local,u_global,v_local...
>>> 
>>> So what could be the problem?
>>> 
>>> 
>>> Thank you very much.
>>> 
>>> Yours sincerely,
>>> 
>>> 
>>> TAY Wee-Beng (Zheng Weiming) 郑伟明
>>> Personal research webpage: http://tayweebeng.wixsite.com/website
>>> Youtube research showcase: 
>>> https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
>>> linkedin: www.linkedin.com/in/tay-weebeng
>>> 
>>> 
>>> On 20/2/2018 10:46 PM, Jose E. Roman wrote:
 Probably the first error is produced by using a variable (mpi_comm) with 
 the same name as an MPI type.
 
 The second error I guess is due to variable 

Re: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3

2018-02-20 Thread TAY wee-beng


On 21/2/2018 10:47 AM, Smith, Barry F. wrote:

   Try setting

   u_global = tVec(1)

   immediately before the call to DMCreateGlobalVector()



Hi,

I added the line in but still got the same error below. Btw, my code is 
organised as:


module global_data

#include "petsc/finclude/petsc.h"
use petsc
use kdtree2_module
implicit none
save
...
Vec u_local,u_global ...
...
contains

subroutine allo_var
...
u_global = tVec(1)
call DMCreateGlobalVector(da_u,u_global,ierr)
...




[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Null argument, when expecting valid pointer
[0]PETSC ERROR: Null Object: Parameter # 2
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017
[0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8.3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 11:18:20 2018
[0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifort" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Include/x64]" --with-mpi-mpiexec="/cygdrive/c/Program Files/Microsoft MPI/Bin/mpiexec.exe" --with-debugging=1 --with-file-create-pause=1 --prefix=/cygdrive/c/wtay/Lib/petsc-3.8.3_win64_msmpi_vs2008 --with-mpi-lib="[/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib,/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib]" --with-shared-libraries=0
[0]PETSC ERROR: #1 VecSetLocalToGlobalMapping() line 78 in C:\Source\PETSC-~2.3\src\vec\vec\INTERF~1\vector.c
[0]PETSC ERROR: #2 DMCreateGlobalVector_DA() line 41 in C:\Source\PETSC-~2.3\src\dm\impls\da\dadist.c
[0]PETSC ERROR: #3 DMCreateGlobalVector() line 844 in C:\Source\PETSC-~2.3\src\dm\INTERF~1\dm.c

Thanks.

On Feb 20, 2018, at 6:40 PM, TAY wee-beng  wrote:

Hi,

Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to debug 
step by step. I ran into a problem when calling:

call DMCreateGlobalVector(da_u,u_global,ierr)

The error is:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Null argument, when expecting valid pointer
[0]PETSC ERROR: Null Object: Parameter # 2
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017
[0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8.3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018
[0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifort" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x

But all I changed is from:

module global_data
#include "petsc/finclude/petsc.h"
use petsc
use kdtree2_module
implicit none
save
!grid variables

integer :: size_x,s

...

to

module global_data
use kdtree2_module
implicit none
save
#include "petsc/finclude/petsc.h90"
!grid variables
integer :: size_x,s...

...

da_u, u_global were declared thru:

DM  da_u,da_v,...
DM  da_cu_types ...
Vec u_local,u_global,v_local...

So what could be the problem?


Thank you very much.

Yours sincerely,


TAY Wee-Beng (Zheng Weiming) 郑伟明
Personal research webpage: http://tayweebeng.wixsite.com/website
Youtube research showcase: 
https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
linkedin: www.linkedin.com/in/tay-weebeng


On 20/2/2018 10:46 PM, Jose E. Roman wrote:

Probably the first error is produced by using a variable (mpi_comm) with the 
same name as an MPI type.

The second error I guess is due to variable tvec, since a Fortran type tVec is 
now being defined in src/vec/f90-mod/petscvec.h

Jose



El 20 feb 2018, a las 15:35, Smith, Barry F.  escribió:


   Please run a clean compile of everything and cut and paste all the output. 
This will make it much easier to debug than trying to understand your snippets 
of what is going wrong.


On Feb 20, 2018, at 1:56 AM, TAY Wee Beng  wrote:

Hi,

I was previously using PETSc 3.7.6 on different clusters with both Intel
Fortran and GNU Fortran. After upgrading, I met some problems when
trying to compile:

On Intel Fortran:

Previously, I was using:

#include "petsc/finclude/petsc.h90"

in *.F90 files that require the use of PETSc

I read in the change log that h90 is no longer there and so I replaced
with #include "petsc/finclude/petsc.h"

It worked. But I also have some *.F90 which do not use PETSc. However,
they use some modules which uses 

Re: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3

2018-02-20 Thread Smith, Barry F.

  Try setting

  u_global = tVec(1)

  immediately before the call to DMCreateGlobalVector()



> On Feb 20, 2018, at 6:40 PM, TAY wee-beng  wrote:
> 
> Hi,
> 
> Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to debug 
> step by step. I ran into a problem when calling:
> 
> call DMCreateGlobalVector(da_u,u_global,ierr)
> 
> The error is:
> 
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [0]PETSC ERROR: Null Object: Parameter # 2
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017
> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8.3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018
> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifort" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x
> 
> But all I changed is from:
> 
> module global_data
> #include "petsc/finclude/petsc.h"
> use petsc
> use kdtree2_module
> implicit none
> save
> !grid variables
> 
> integer :: size_x,s
> 
> ...
> 
> to
> 
> module global_data
> use kdtree2_module
> implicit none
> save
> #include "petsc/finclude/petsc.h90"
> !grid variables
> integer :: size_x,s...
> 
> ...
> 
> da_u, u_global were declared thru:
> 
> DM  da_u,da_v,...
> DM  da_cu_types ...
> Vec u_local,u_global,v_local...
> 
> So what could be the problem?
> 
> 
> Thank you very much.
> 
> Yours sincerely,
> 
> 
> TAY Wee-Beng (Zheng Weiming) 郑伟明
> Personal research webpage: http://tayweebeng.wixsite.com/website
> Youtube research showcase: 
> https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
> linkedin: www.linkedin.com/in/tay-weebeng
> 
> 
> On 20/2/2018 10:46 PM, Jose E. Roman wrote:
>> Probably the first error is produced by using a variable (mpi_comm) with the 
>> same name as an MPI type.
>> 
>> The second error I guess is due to variable tvec, since a Fortran type tVec 
>> is now being defined in src/vec/f90-mod/petscvec.h
>> 
>> Jose
>> 
>> 
>>> El 20 feb 2018, a las 15:35, Smith, Barry F.  escribió:
>>> 
>>> 
>>>   Please run a clean compile of everything and cut and paste all the 
>>> output. This will make it much easier to debug than trying to understand 
>>> your snippets of what is going wrong.
>>> 
 On Feb 20, 2018, at 1:56 AM, TAY Wee Beng  wrote:
 
 Hi,
 
 I was previously using PETSc 3.7.6 on different clusters with both Intel
 Fortran and GNU Fortran. After upgrading, I met some problems when
 trying to compile:
 
 On Intel Fortran:
 
 Previously, I was using:
 
 #include "petsc/finclude/petsc.h90"
 
 in *.F90 files that require the use of PETSc
 
 I read in the change log that h90 is no longer there and so I replaced
 with #include "petsc/finclude/petsc.h"
 
 It worked. But I also have some *.F90 which do not use PETSc. However,
 they use some modules which uses PETSc.
 
 Now I can't compile them. The error is :
 
 math_routine.f90(3): error #7002: Error in opening the compiled module
 file.  Check INCLUDE paths.   [PETSC]
 use mpi_subroutines
 
 mpi_subroutines is a module which uses PETSc, and it compiled w/o problem.
 
 The solution is that I have to compile e.g.  math_routine.F90 as if they
 use PETSc, by including PETSc include and lib files.
 
 May I know why this is so? It was not necessary before.
 
 Anyway, it managed to compile until it reached hypre.F90.
 
 Previously, due to some bugs, I have to compile hypre with the -r8
 option. Also, I have to use:
 
 integer(8) mpi_comm
 
 mpi_comm = MPI_COMM_WORLD
 
 to make my codes work with HYPRE.
 
 But now, compiling gives the error:
 
 hypre.F90(11): error #6401: The attributes of this name conflict with
 those made accessible by a USE statement.   [MPI_COMM]
 integer(8) mpi_comm
 --^
 hypre.F90(84): error #6478: A type-name must not be used as a
 variable.   [MPI_COMM]
   mpi_comm = MPI_COMM_WORLD
 ^
 hypre.F90(84): error #6303: The assignment operation or the binary
 expression operation is invalid for the data types of the two
 operands.   [1140850688]
   mpi_comm = MPI_COMM_WORLD
 ---^
 hypre.F90(100): error #6478: A type-name must not be used as a
 variable.   [MPI_COMM]
   call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr)
 ...

Re: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3

2018-02-20 Thread TAY wee-beng

Hi,

Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to 
debug step by step. I ran into a problem when calling:


call DMCreateGlobalVector(da_u,u_global,ierr)

The error is:

[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Null argument, when expecting valid pointer
[0]PETSC ERROR: Null Object: Parameter # 2
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017
[0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8.3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018
[0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifort" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x


But all I changed is from:

module global_data
#include "petsc/finclude/petsc.h"
use petsc
use kdtree2_module
implicit none
save
!grid variables

integer :: size_x,s

...

to

module global_data
use kdtree2_module
implicit none
save
#include "petsc/finclude/petsc.h90"
!grid variables
integer :: size_x,s...

...

da_u, u_global were declared thru:

DM  da_u,da_v,...
DM  da_cu_types ...
Vec u_local,u_global,v_local...

So what could be the problem?


Thank you very much.

Yours sincerely,


TAY Wee-Beng (Zheng Weiming) 郑伟明
Personal research webpage: http://tayweebeng.wixsite.com/website
Youtube research showcase: 
https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
linkedin: www.linkedin.com/in/tay-weebeng


On 20/2/2018 10:46 PM, Jose E. Roman wrote:

Probably the first error is produced by using a variable (mpi_comm) with the 
same name as an MPI type.

The second error I guess is due to variable tvec, since a Fortran type tVec is 
now being defined in src/vec/f90-mod/petscvec.h

Jose



El 20 feb 2018, a las 15:35, Smith, Barry F.  escribió:


   Please run a clean compile of everything and cut and paste all the output. 
This will make it much easier to debug than trying to understand your snippets 
of what is going wrong.


On Feb 20, 2018, at 1:56 AM, TAY Wee Beng  wrote:

Hi,

I was previously using PETSc 3.7.6 on different clusters with both Intel
Fortran and GNU Fortran. After upgrading, I met some problems when
trying to compile:

On Intel Fortran:

Previously, I was using:

#include "petsc/finclude/petsc.h90"

in *.F90 files that require the use of PETSc

I read in the change log that h90 is no longer there and so I replaced
with #include "petsc/finclude/petsc.h"

It worked. But I also have some *.F90 which do not use PETSc. However,
they use some modules which uses PETSc.

Now I can't compile them. The error is :

math_routine.f90(3): error #7002: Error in opening the compiled module
file.  Check INCLUDE paths.   [PETSC]
use mpi_subroutines

mpi_subroutines is a module which uses PETSc, and it compiled w/o problem.

The solution is that I have to compile e.g.  math_routine.F90 as if they
use PETSc, by including PETSc include and lib files.

May I know why this is so? It was not necessary before.

Anyway, it managed to compile until it reached hypre.F90.

Previously, due to some bugs, I have to compile hypre with the -r8
option. Also, I have to use:

integer(8) mpi_comm

mpi_comm = MPI_COMM_WORLD

to make my codes work with HYPRE.

But now, compiling gives the error:

hypre.F90(11): error #6401: The attributes of this name conflict with
those made accessible by a USE statement.   [MPI_COMM]
integer(8) mpi_comm
--^
hypre.F90(84): error #6478: A type-name must not be used as a
variable.   [MPI_COMM]
   mpi_comm = MPI_COMM_WORLD
^
hypre.F90(84): error #6303: The assignment operation or the binary
expression operation is invalid for the data types of the two
operands.   [1140850688]
   mpi_comm = MPI_COMM_WORLD
---^
hypre.F90(100): error #6478: A type-name must not be used as a
variable.   [MPI_COMM]
   call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr)
...

What's actually happening? Why can't I compile now?

On GNU gfortran:

I tried to use similar tactics as above here. However, when compiling
math_routine.F90, I got the error:

math_routine.F90:1333:21:

call subb(orig,vert1,tvec)
1
Error: Invalid procedure argument at (1)
math_routine.F90:1339:18:

qvec = cross_pdt2(tvec,edge1)
 1
Error: Invalid procedure argument at (1)
math_routine.F90:1345:21:

uu = dot_product(tvec,pvec)
1
Error: ‘vector_a’ argument of ‘dot_product’ intrinsic at (1) must be
numeric or LOGICAL
math_routine.F90:1371:21:

uu = dot_product(tvec,pvec)

These errors were not present before. My variables 

Re: [petsc-users] Compiling with PETSc 64-bit indices

2018-02-20 Thread Matthew Knepley
On Tue, Feb 20, 2018 at 8:08 PM, TAY wee-beng  wrote:

>
> On 21/2/2018 9:00 AM, Matthew Knepley wrote:
>
> On Tue, Feb 20, 2018 at 7:54 PM, TAY wee-beng  wrote:
>
>> Hi,
>>
>> When I run my CFD code with a grid size of 1119x1119x499 ( total grid
>> size =624828339 ), I got the error saying I need to compile PETSc with
>> 64-bit indices.
>>
>> So I tried to compile PETSc again and then compile my CFD code with the
>> newly compiled PETSc. However, now I got segmentation error:
>>
>> rm: cannot remove `log': No such file or directory
>> [409]PETSC ERROR: ------------------------------------------------------------------------
>> [409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR:
>> 
>> [410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>> probably memory access out of range
>> [410]PETSC ERROR: Try option -start_in_debugger or
>> -on_error_attach_debugger
>> [410]PETSC ERROR: [536]PETSC ERROR: ------------------------------------------------------------------------
>> [536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>> probably memory access out of range
>> [536]PETSC ERROR: Try option -start_in_debugger or
>> -on_error_attach_debugger
>> [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>> [536]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac
>> OS X to find memory corruption errors
>> [536]PETSC ERROR: likely location of problem given in stack below
>> [536]PETSC ERROR: -  Stack Frames
>> 
>> [536]PETSC ERROR: Note: The EXACT line numbers in the stack are not
>> available,
>> [536]PETSC ERROR:   INSTEAD the line number of the start of the
>> function
>> [536]PETSC ERROR:   is given.
>> [536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line 581
>> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c
>> [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>> [410]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac
>> OS X to find memory corruption errors
>> [410]PETSC ERROR: likely location of problem given in stack below
>> [410]PETSC ERROR: -  Stack Frames
>> 
>> [410]PETSC ERROR: Note: The EXACT line numbers in the stack are not
>> available,
>> [897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613
>> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c
>> [536]PETSC ERROR: [536] DMDACreate3d line 1434
>> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c
>> [536]PETSC ERROR: - Error Message
>> --
>>
>> The CFD code worked previously but increasing the problem size results in
>> segmentation error. It seems to be related to DMDACreate3d and
>> DMDASetOwnershipRanges. Any idea where the problem lies?
>>
>> Besides, I want to know when and why do I have to use PETSc with 64-bit
>> indices?
>>
>
> 1) A signed 32-bit integer can hold index values only up to 2^31 - 1, about 2.1e9, so if you have a
> 3D velocity, pressure, and energy, you already have about 3e9 unknowns,
> before you even start to count nonzero entries in the matrix. 64-bit
> integers allow you to handle these big sizes.
>
>
>> Also, can I use the 64-bit indices version with smaller sized problems?
>>
>
> 2) Yes
>
>
>> And is there a speed difference between using the 32-bit and 64-bit
>> indices ver?
>
>
> 3) I have seen no evidence of this
>
> 4) My guess is that you have defined regular integers in your code and
> passed them to PETSc, rather than using PetscInt as the type.
>
> Oh that seems probable. So I am still using integer(4) when it should be
> integer(8) for some values, is that so? If I use PetscInt, is it the same
> as integer(8)? Or does it depend on the actual number?
>

PetscInt will be integer(4) if you configure with 32-bit ints, and
integer(8) if you configure with 64-bit ints. If you use it consistently,
you can avoid problems
with matching the PETSc API.
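
A short Fortran sketch of the pattern being described (illustrative declarations, not the user's actual ones):

  PetscInt :: size_x, size_y, size_z   ! global grid dimensions passed to PETSc calls
  PetscInt :: i, j, k                  ! loop indices handed to PETSc routines
  PetscErrorCode :: ierr
  ! PetscInt maps to integer(4) in a default build and to integer(8) when PETSc is
  ! configured --with-64-bit-indices, so these always match the library's expectations.

Variables that never cross the PETSc API can stay as default integers, which is also why the memory increase discussed below is usually small.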

I wonder if I replace all my integer to PetscInt, will there be a large
> increase in memory usage, because all integer(4) now becomes integer(8)?
>

Only if you have large integer storage. Most codes do not.

  Thanks,

Matt


> Thanks.
>
>
>   Thanks,
>
>  Matt
>
>
>>
>> --
>> Thank you very much.
>>
>> Yours sincerely,
>>
>> 
>> TAY Wee-Beng (Zheng Weiming) 郑伟明
>> Personal research webpage: http://tayweebeng.wixsite.com/website
>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
>> linkedin: www.linkedin.com/in/tay-weebeng
>> 
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more 

Re: [petsc-users] Compiling with PETSc 64-bit indices

2018-02-20 Thread TAY wee-beng


On 21/2/2018 9:00 AM, Matthew Knepley wrote:
On Tue, Feb 20, 2018 at 7:54 PM, TAY wee-beng > wrote:


Hi,

When I run my CFD code with a grid size of 1119x1119x499 ( total
grid size =    624828339 ), I got the error saying I need to
compile PETSc with 64-bit indices.

So I tried to compile PETSc again and then compile my CFD code
with the newly compiled PETSc. However, now I got segmentation error:

rm: cannot remove `log': No such file or directory
[409]PETSC ERROR:

[409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR:

[410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation
Violation, probably memory access out of range
[410]PETSC ERROR: Try option -start_in_debugger or
-on_error_attach_debugger
[410]PETSC ERROR: [536]PETSC ERROR:

[536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation
Violation, probably memory access out of range
[536]PETSC ERROR: Try option -start_in_debugger or
-on_error_attach_debugger
[536]PETSC ERROR: or see
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind

[536]PETSC ERROR: or try http://valgrind.org on GNU/linux and
Apple Mac OS X to find memory corruption errors
[536]PETSC ERROR: likely location of problem given in stack below
[536]PETSC ERROR: -  Stack Frames

[536]PETSC ERROR: Note: The EXACT line numbers in the stack are
not available,
[536]PETSC ERROR:   INSTEAD the line number of the start of
the function
[536]PETSC ERROR:   is given.
[536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line 581
/home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c
[536]PETSC ERROR: or see
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind

[410]PETSC ERROR: or try http://valgrind.org on GNU/linux and
Apple Mac OS X to find memory corruption errors
[410]PETSC ERROR: likely location of problem given in stack below
[410]PETSC ERROR: -  Stack Frames

[410]PETSC ERROR: Note: The EXACT line numbers in the stack are
not available,
[897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613
/home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c
[536]PETSC ERROR: [536] DMDACreate3d line 1434
/home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c
[536]PETSC ERROR: - Error Message
--

The CFD code worked previously but increasing the problem size
results in segmentation error. It seems to be related to
DMDACreate3d and DMDASetOwnershipRanges. Any idea where the
problem lies?

Besides, I want to know when and why do I have to use PETSc with
64-bit indices?


1) A signed 32-bit integer can hold index values only up to 2^31 - 1, about 2.1e9, so if you 
have a 3D velocity, pressure, and energy, you already have about 3e9 unknowns,
    before you even start to count nonzero entries in the matrix. 
64-bit integers allow you to handle these big sizes.


Also, can I use the 64-bit indices version with smaller sized
problems?


2) Yes

And is there a speed difference between using the 32-bit and
64-bit indices ver?


3) I have seen no evidence of this

4) My guess is that you have defined regular integers in your code and 
passed them to PETSc, rather than using PetscInt as the type.
Oh that seems probable. So I am still using integer(4) when it should be 
integer(8) for some values, is that so? If I use PetscInt, is it the 
same as integer(8)? Or does it depend on the actual number?


I wonder if I replace all my integer to PetscInt, will there be a large 
increase in memory usage, because all integer(4) now becomes integer(8)?


Thanks.


  Thanks,

     Matt


-- 
Thank you very much.


Yours sincerely,


TAY Wee-Beng (Zheng Weiming) 郑伟明
Personal research webpage: http://tayweebeng.wixsite.com/website

Youtube research showcase:
https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA

linkedin: www.linkedin.com/in/tay-weebeng






--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 

Re: [petsc-users] Compiling with PETSc 64-bit indices

2018-02-20 Thread Matthew Knepley
On Tue, Feb 20, 2018 at 7:54 PM, TAY wee-beng  wrote:

> Hi,
>
> When I run my CFD code with a grid size of 1119x1119x499 ( total grid size
> =624828339 ), I got the error saying I need to compile PETSc with
> 64-bit indices.
>
> So I tried to compile PETSc again and then compile my CFD code with the
> newly compiled PETSc. However, now I got segmentation error:
>
> rm: cannot remove `log': No such file or directory
> [409]PETSC ERROR: ------------------------------------------------------------------------
> [409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR:
> 
> [410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
> [410]PETSC ERROR: Try option -start_in_debugger or
> -on_error_attach_debugger
> [410]PETSC ERROR: [536]PETSC ERROR: ------------------------------------------------------------------------
> [536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
> [536]PETSC ERROR: Try option -start_in_debugger or
> -on_error_attach_debugger
> [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [536]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac
> OS X to find memory corruption errors
> [536]PETSC ERROR: likely location of problem given in stack below
> [536]PETSC ERROR: -  Stack Frames
> 
> [536]PETSC ERROR: Note: The EXACT line numbers in the stack are not
> available,
> [536]PETSC ERROR:   INSTEAD the line number of the start of the
> function
> [536]PETSC ERROR:   is given.
> [536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line 581
> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c
> [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [410]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac
> OS X to find memory corruption errors
> [410]PETSC ERROR: likely location of problem given in stack below
> [410]PETSC ERROR: -  Stack Frames
> 
> [410]PETSC ERROR: Note: The EXACT line numbers in the stack are not
> available,
> [897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613
> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c
> [536]PETSC ERROR: [536] DMDACreate3d line 1434
> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c
> [536]PETSC ERROR: - Error Message
> --
>
> The CFD code worked previously but increasing the problem size results in
> segmentation error. It seems to be related to DMDACreate3d and
> DMDASetOwnershipRanges. Any idea where the problem lies?
>
> Besides, I want to know when and why do I have to use PETSc with 64-bit
> indices?
>

1) A signed 32-bit integer can hold index values only up to 2^31 - 1, about 2.1e9, so if you have a
3D velocity, pressure, and energy, you already have about 3e9 unknowns,
before you even start to count nonzero entries in the matrix. 64-bit
integers allow you to handle these big sizes.
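
As a quick check with the grid in question, assuming the five unknowns named above (three velocity components, pressure, energy) per grid point: 1119 x 1119 x 499 = 624,828,339 points, and 624,828,339 x 5 = 3,124,141,695, which is larger than the signed 32-bit limit of 2^31 - 1 = 2,147,483,647, so 32-bit indices overflow.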


> Also, can I use the 64-bit indices version with smaller sized problems?
>

2) Yes


> And is there a speed difference between using the 32-bit and 64-bit
> indices ver?


3) I have seen no evidence of this

4) My guess is that you have defined regular integers in your code and
passed them to PETSc, rather than using PetscInt as the type.

  Thanks,

 Matt


>
> --
> Thank you very much.
>
> Yours sincerely,
>
> 
> TAY Wee-Beng (Zheng Weiming) 郑伟明
> Personal research webpage: http://tayweebeng.wixsite.com/website
> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
> linkedin: www.linkedin.com/in/tay-weebeng
> 
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] Compiling with PETSc 64-bit indices

2018-02-20 Thread TAY wee-beng

Hi,

When I run my CFD code with a grid size of 1119x1119x499 ( total grid 
size =    624828339 ), I got the error saying I need to compile PETSc 
with 64-bit indices.


So I tried to compile PETSc again and then compile my CFD code with the 
newly compiled PETSc. However, now I got segmentation error:


rm: cannot remove `log': No such file or directory
[409]PETSC ERROR: 

[409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR: 

[410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
probably memory access out of range

[410]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[410]PETSC ERROR: [536]PETSC ERROR: 

[536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
probably memory access out of range

[536]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[536]PETSC ERROR: or see 
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[536]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac 
OS X to find memory corruption errors

[536]PETSC ERROR: likely location of problem given in stack below
[536]PETSC ERROR: -  Stack Frames 

[536]PETSC ERROR: Note: The EXACT line numbers in the stack are not 
available,

[536]PETSC ERROR:   INSTEAD the line number of the start of the function
[536]PETSC ERROR:   is given.
[536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line 581 
/home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c
[536]PETSC ERROR: or see 
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[410]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac 
OS X to find memory corruption errors

[410]PETSC ERROR: likely location of problem given in stack below
[410]PETSC ERROR: -  Stack Frames 

[410]PETSC ERROR: Note: The EXACT line numbers in the stack are not 
available,
[897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613 
/home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c
[536]PETSC ERROR: [536] DMDACreate3d line 1434 
/home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c
[536]PETSC ERROR: - Error Message 
--


The CFD code worked previously but increasing the problem size results 
in segmentation error. It seems to be related to DMDACreate3d and 
DMDASetOwnershipRanges. Any idea where the problem lies?


Besides, I want to know when and why do I have to use PETSc with 64-bit 
indices?


Also, can I use the 64-bit indices version with smaller sized problems?

And is there a speed difference between using the 32-bit and 64-bit 
indices ver?


--
Thank you very much.

Yours sincerely,


TAY Wee-Beng (Zheng Weiming) 郑伟明
Personal research webpage: http://tayweebeng.wixsite.com/website
Youtube research showcase: 
https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
linkedin: www.linkedin.com/in/tay-weebeng




Re: [petsc-users] Question on DMPlexCreateSection for Fortran

2018-02-20 Thread Danyang Su

On 18-02-20 09:52 AM, Matthew Knepley wrote:
On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su > wrote:


Hi All,

I tried to compile the DMPlexCreateSection code but got error
information as shown below.

Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type

I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then
the code can be compiled but run into Segmentation Violation error
in DMPlexCreateSection.

From the webpage

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html 



The F90 version is DMPlexCreateSectionF90. Doing this with F77 arrays 
would have been too painful.

Hi Matt,

Sorry, I still cannot compile the code if use DMPlexCreateSectionF90 
instead of DMPlexCreateSection. Would you please tell me in more details?


undefined reference to `dmplexcreatesectionf90_'

then I #include , but this throws more errors during compilation.



    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:

  PETSCSECTION_HIDE section
  1
Error: Unclassifiable statement at (1)
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:167.10:
    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:

  PETSCSECTION_HIDE section
  1
Error: Unclassifiable statement at (1)
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:179.10:
    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:



  Thanks,

     Matt

dmda_flow%da is distributed dm object that works fine.

The fortran example I follow is

http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90

.


What parameters should I use if passing null to bcField, bcComps,
bcPoints and perm.

PetscErrorCode DMPlexCreateSection(DM dm, PetscInt dim, PetscInt numFields, const PetscInt numComp[], const PetscInt numDof[], PetscInt numBC, const PetscInt bcField[], const IS bcComps[], const IS bcPoints[], IS perm, PetscSection *section)

#include 
#include 
#include 

...

#ifdef USG
    numFields = 1
    numComp(1) = 1
    pNumComp => numComp

    do i = 1, numFields*(dmda_flow%dim+1)
  numDof(i) = 0
    end do
    numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof
    pNumDof => numDof

    numBC = 0

    call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim, &
numFields,pNumComp,pNumDof, &
numBC,PETSC_NULL_INTEGER, &
PETSC_NULL_IS,PETSC_NULL_IS, & !Error here
PETSC_NULL_IS,section,ierr)
    CHKERRQ(ierr)

    call PetscSectionSetFieldName(section,0,'flow',ierr)
    CHKERRQ(ierr)

    call DMSetDefaultSection(dmda_flow%da,section,ierr)
    CHKERRQ(ierr)

    call PetscSectionDestroy(section,ierr)
    CHKERRQ(ierr)
#endif

Thanks,

Danyang




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 




Re: [petsc-users] Question on DMPlexCreateSection for Fortran

2018-02-20 Thread Matthew Knepley
On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su  wrote:

> Hi All,
>
> I tried to compile the DMPlexCreateSection code but got error information
> as shown below.
>
> Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type
>
> I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then the code
> can be compiled but run into Segmentation Violation error in
> DMPlexCreateSection.
>
From the webpage


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html


The F90 version is DMPlexCreateSectionF90. Doing this with F77 arrays would
have been too painful.

  Thanks,

 Matt

> dmda_flow%da is distributed dm object that works fine.
>
> The fortran example I follow is http://www.mcs.anl.gov/petsc/
> petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90.
>
> What parameters should I use if passing null to bcField, bcComps, bcPoints
> and perm.
>
> PetscErrorCode DMPlexCreateSection(DM dm, PetscInt dim, PetscInt numFields, const PetscInt numComp[], const PetscInt numDof[], PetscInt numBC, const PetscInt bcField[], const IS bcComps[], const IS bcPoints[], IS perm, PetscSection *section)
>
>
> #include 
> #include 
> #include 
>
> ...
>
> #ifdef USG
> numFields = 1
> numComp(1) = 1
> pNumComp => numComp
>
> do i = 1, numFields*(dmda_flow%dim+1)
>   numDof(i) = 0
> end do
> numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof
> pNumDof => numDof
>
> numBC = 0
>
> call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim,  &
>  numFields,pNumComp,pNumDof,
> &
>  numBC,PETSC_NULL_INTEGER,
> &
>  PETSC_NULL_IS,PETSC_NULL_IS,
> & !Error here
>  PETSC_NULL_IS,section,ierr)
> CHKERRQ(ierr)
>
> call PetscSectionSetFieldName(section,0,'flow',ierr)
> CHKERRQ(ierr)
>
> call DMSetDefaultSection(dmda_flow%da,section,ierr)
> CHKERRQ(ierr)
>
> call PetscSectionDestroy(section,ierr)
> CHKERRQ(ierr)
> #endif
>
> Thanks,
>
> Danyang
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] Question on DMPlexCreateSection for Fortran

2018-02-20 Thread Danyang Su

Hi All,

I tried to compile the DMPlexCreateSection code but got error 
information as shown below.


Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type

I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then the code 
can be compiled but run into Segmentation Violation error in 
DMPlexCreateSection.


dmda_flow%da is distributed dm object that works fine.

The fortran example I follow is 
http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90. 



What parameters should I use if passing null to bcField, bcComps, 
bcPoints and perm.


PetscErrorCode DMPlexCreateSection(DM dm, PetscInt dim, PetscInt numFields, const PetscInt numComp[], const PetscInt numDof[], PetscInt numBC, const PetscInt bcField[], const IS bcComps[], const IS bcPoints[], IS perm, PetscSection *section)


#include 
#include 
#include 

...

#ifdef USG
    numFields = 1
    numComp(1) = 1
    pNumComp => numComp

    do i = 1, numFields*(dmda_flow%dim+1)
  numDof(i) = 0
    end do
    numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof
    pNumDof => numDof

    numBC = 0

    call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim,  &
numFields,pNumComp,pNumDof,   &
numBC,PETSC_NULL_INTEGER,   &
PETSC_NULL_IS,PETSC_NULL_IS, & !Error here
 PETSC_NULL_IS,section,ierr)
    CHKERRQ(ierr)

    call PetscSectionSetFieldName(section,0,'flow',ierr)
    CHKERRQ(ierr)

    call DMSetDefaultSection(dmda_flow%da,section,ierr)
    CHKERRQ(ierr)

    call PetscSectionDestroy(section,ierr)
    CHKERRQ(ierr)
#endif

Thanks,

Danyang



Re: [petsc-users] Installation without BLAS/LAPACK

2018-02-20 Thread Satish Balay
[sorry - it's --download-fblaslapack]

What it does is:

1. gets the source tarball specified in
 config/BuildSystem/config/packages/fblaslapack.py i.e
 http://ftp.mcs.anl.gov/pub/petsc/externalpackages/fblaslapack-3.4.2.tar.gz

2. compiles it

3. installs it in PETSC_DIR/PETSC_ARCH/lib

4. Then configures PETSc to use this library

Note: PETSc libraries also get installed in PETSC_DIR/PETSC_ARCH/lib

Also check installation instructions at
http://www.mcs.anl.gov/petsc/documentation/installation.html

Satish

On Tue, 20 Feb 2018, Ali Berk Kahraman wrote:

> When I call --download-blaslapack, what does it do exactly? Where does it
> install the library? Does it touch anything else (such as updating
> versions of mpicc)? My concern is that if I call download-blaslapack I will
> "change" some stuff in the /usr/bin directory that might disable some other
> program or package installed on the computer.
> 
> 
> On 20-02-2018 19:34, Satish Balay wrote:
> > On Tue, 20 Feb 2018, Ali Berk Kahraman wrote:
> >
> >> Hello All,
> >>
> >> I have access to a common computer in my school, and I want to use petsc on
> >> it. The problem is that I do not have root access, and neither do I want
> >> it.
> >> The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc
> >> somehow without having any BLAS commands? If not, can I install BLAS
> >> somehow
> >> only on my own folder (/home/myfolder) without touching anything inside
> >> /usr/
> >> folder?
> > You don't need root access to install/use PETSc.
> >
> > And you can ask petsc configure to install any required or missing packages.
> >
> > ./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack
> > --download-mpich
> > make
> >
> > If you wish to install PETSc with a preinstalled mpi - you can do:
> >
> > ./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack
> > make
> >
> > Satish
> 
> 
> 



Re: [petsc-users] Installation without BLAS/LAPACK

2018-02-20 Thread Ali Berk Kahraman

Exactly the words I wanted to hear. Thank you very much.


On 20-02-2018 19:46, Smith, Barry F. wrote:



On Feb 20, 2018, at 10:41 AM, Ali Berk Kahraman  
wrote:

When I call --download-blaslapack, what does it do exactly? Where does it install the 
library? Does it touch anything else (such as updating versions of mpicc)? My 
concern is that if I call download-blaslapack I will "change" some stuff in the 
/usr/bin directory that might disable some other program or package installed on the 
computer.

   It puts everything in your PETSC_DIR and does not, nor could not change 
anything in /usr

Barry



On 20-02-2018 19:34, Satish Balay wrote:

On Tue, 20 Feb 2018, Ali Berk Kahraman wrote:


Hello All,

I have access to a common computer in my school, and I want to use petsc on
it. The problem is that I do not have root access, and neither do I want it.
The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc
somehow without having any BLAS commands? If not, can I install BLAS somehow
only on my own folder (/home/myfolder) without touching anything inside /usr/
folder?

You don't need root access to install/use PETSc.

And you can ask petsc configure to install any required or missing packages.

./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack --download-mpich
make

If you wish to install PETSc with a preinstalled mpi - you can do:

./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack
make

Satish




Re: [petsc-users] Installation without BLAS/LAPACK

2018-02-20 Thread Smith, Barry F.


> On Feb 20, 2018, at 10:41 AM, Ali Berk Kahraman  
> wrote:
> 
> When I call --download-blaslapack, what does it do exactly? Where does it 
> install the library? Does it touch anything else (such as updating 
> versions of mpicc)? My concern is that if I call download-blaslapack I will 
> "change" some stuff in the /usr/bin directory that might disable some other 
> program or package installed on the computer.

  It puts everything in your PETSC_DIR and does not, nor could not change 
anything in /usr

   Barry

> 
> 
> On 20-02-2018 19:34, Satish Balay wrote:
>> On Tue, 20 Feb 2018, Ali Berk Kahraman wrote:
>> 
>>> Hello All,
>>> 
>>> I have access to a common computer in my school, and I want to use petsc on
>>> it. The problem is that I do not have root access, and neither do I want it.
>>> The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc
>>> somehow without having any BLAS commands? If not, can I install BLAS somehow
>>> only on my own folder (/home/myfolder) without touching anything inside 
>>> /usr/
>>> folder?
>> You don't need root access to install/use PETSc.
>> 
>> And you can ask petsc configure to install any required or missing packages.
>> 
>> ./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack --download-mpich
>> make
>> 
>> If you wish to install PETSc with a preinstalled mpi - you can do:
>> 
>> ./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack
>> make
>> 
>> Satish
> 



Re: [petsc-users] Installation without BLAS/LAPACK

2018-02-20 Thread Smith, Barry F.


> On Feb 20, 2018, at 10:30 AM, Ali Berk Kahraman  
> wrote:
> 
> Hello All,
> 
> I have access to a common computer in my school, and I want to use petsc on 
> it. The problem is that I do not have root access, and neither do I want it. 
> The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc 
> somehow without having any BLAS commands? If not, can I install BLAS somehow 
> only on my own folder (/home/myfolder) without touching anything inside /usr/ 
> folder?

  Of course, PETSc NEVER requires you to have any kind of root access to 
install it and its packages. Just use --download-fblaslapack as a ./configure 
option. It will keep the BLASLAPACK inside the PETSc directory with the PETSc 
libraries.


  Barry

> 
> Best Regards,
> 
> Ali Berk Kahraman
> 
> 



Re: [petsc-users] Installation without BLAS/LAPACK

2018-02-20 Thread Ali Berk Kahraman
When I call --download-blaslapack, what does it do exactly? Where does 
it install the library? Does it touch anything else (such as 
updating versions of mpicc)? My concern is that if I call 
download-blaslapack I will "change" some stuff in the /usr/bin directory 
that might disable some other program or package installed on the computer.



On 20-02-2018 19:34, Satish Balay wrote:

On Tue, 20 Feb 2018, Ali Berk Kahraman wrote:


Hello All,

I have access to a common computer in my school, and I want to use petsc on
it. The problem is that I do not have root access, and neither do I want it.
The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc
somehow without having any BLAS commands? If not, can I install BLAS somehow
only on my own folder (/home/myfolder) without touching anything inside /usr/
folder?

You don't need root access to install/use PETSc.

And you can ask petsc configure to install any required or missing packages.

./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack --download-mpich
make

If you wish to install PETSc with a preinstalled mpi - you can do:

./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack
make

Satish




Re: [petsc-users] Installation without BLAS/LAPACK

2018-02-20 Thread Satish Balay
On Tue, 20 Feb 2018, Satish Balay wrote:

> On Tue, 20 Feb 2018, Ali Berk Kahraman wrote:
> 
> > Hello All,
> > 
> > I have access to a common computer in my school, and I want to use petsc on
> > it. The problem is that I do not have root access, and neither do I want it.
> > The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc
> > somehow without having any BLAS commands? If not, can I install BLAS somehow
> > only on my own folder (/home/myfolder) without touching anything inside 
> > /usr/
> > folder?
> 
> You don't need root access to install/use PETSc.
> 
> And you can ask petsc configure to install any required or missing packages.
> 
> ./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack --download-mpich
> make
> 
> If you wish to install PETSc with a preinstalled mpi - you can do:
> 
> ./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack
> make


oops - that should be:

--download-fblaslapack

Satish


[petsc-users] Installation without BLAS/LAPACK

2018-02-20 Thread Ali Berk Kahraman

Hello All,

I have access to a common computer in my school, and I want to use petsc 
on it. The problem is that I do not have root access, and neither do I 
want it. The machine has OpenMPI installed in it, but no BLAS. Can I 
configure petsc somehow without having any BLAS commands? If not, can I 
install BLAS somehow only on my own folder (/home/myfolder) without 
touching anything inside /usr/ folder?


Best Regards,

Ali Berk Kahraman




Re: [petsc-users] how to check if cell is local owned in DMPlex

2018-02-20 Thread Matthew Knepley
On Mon, Feb 19, 2018 at 3:11 PM, Danyang Su  wrote:

> Hi Matt,
>
> Would you please let me know how to check if a cell is locally owned? When
> overlap is 0 in DMPlexDistribute, all the cells are locally owned. How about
> overlap > 0? It seems impossible to check by node, because a cell can
> be locally owned even if none of the nodes in this cell is locally owned.
>

If a cell is in the PetscSF, then it is not locally owned. The local nodes
in the SF are sorted, so I use PetscFindInt
(http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html).

  Thanks,

Matt


> Thanks,
>
> Danyang
>
-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3

2018-02-20 Thread Jose E. Roman
Probably the first error is produced by using a variable (mpi_comm) with the 
same name as an MPI type.

The second error I guess is due to variable tvec, since a Fortran type tVec is 
now being defined in src/vec/f90-mod/petscvec.h

Jose 
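
A minimal Fortran sketch of the kind of rename that avoids both collisions (hypre_comm is an illustrative name; t_vec is the rename reported elsewhere in this thread, and the HYPRE call follows the user's own code):

  integer(8) :: hypre_comm     ! was "mpi_comm", which now clashes with the MPI type name
  real(8)    :: t_vec(3)       ! was "tvec", which now clashes with PETSc's Fortran type tVec

  hypre_comm = MPI_COMM_WORLD
  call HYPRE_StructGridCreate(hypre_comm, 3, grid_hypre, ierr)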


> El 20 feb 2018, a las 15:35, Smith, Barry F.  escribió:
> 
> 
>   Please run a clean compile of everything and cut and paste all the output. 
> This will make it much easier to debug than trying to understand your 
> snippets of what is going wrong. 
> 
>> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng  wrote:
>> 
>> Hi,
>> 
>> I was previously using PETSc 3.7.6 on different clusters with both Intel
>> Fortran and GNU Fortran. After upgrading, I met some problems when
>> trying to compile:
>> 
>> On Intel Fortran:
>> 
>> Previously, I was using:
>> 
>> #include "petsc/finclude/petsc.h90"
>> 
>> in *.F90 files that require the use of PETSc
>> 
>> I read in the change log that h90 is no longer there and so I replaced
>> with #include "petsc/finclude/petsc.h"
>> 
>> It worked. But I also have some *.F90 which do not use PETSc. However,
>> they use some modules which uses PETSc.
>> 
>> Now I can't compile them. The error is :
>> 
>> math_routine.f90(3): error #7002: Error in opening the compiled module
>> file.  Check INCLUDE paths.   [PETSC]
>> use mpi_subroutines
>> 
>> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem.
>> 
>> The solution is that I have to compile e.g.  math_routine.F90 as if they
>> use PETSc, by including PETSc include and lib files.
>> 
>> May I know why this is so? It was not necessary before.
>> 
>> Anyway, it managed to compile until it reached hypre.F90.
>> 
>> Previously, due to some bugs, I have to compile hypre with the -r8
>> option. Also, I have to use:
>> 
>> integer(8) mpi_comm
>> 
>> mpi_comm = MPI_COMM_WORLD
>> 
>> to make my codes work with HYPRE.
>> 
>> But now, compiling gives the error:
>> 
>> hypre.F90(11): error #6401: The attributes of this name conflict with
>> those made accessible by a USE statement.   [MPI_COMM]
>> integer(8) mpi_comm
>> --^
>> hypre.F90(84): error #6478: A type-name must not be used as a
>> variable.   [MPI_COMM]
>>   mpi_comm = MPI_COMM_WORLD
>> ^
>> hypre.F90(84): error #6303: The assignment operation or the binary
>> expression operation is invalid for the data types of the two
>> operands.   [1140850688]
>>   mpi_comm = MPI_COMM_WORLD
>> ---^
>> hypre.F90(100): error #6478: A type-name must not be used as a
>> variable.   [MPI_COMM]
>>   call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr)
>> ...
>> 
>> What's actually happening? Why can't I compile now?
>> 
>> On GNU gfortran:
>> 
>> I tried to use similar tactics as above here. However, when compiling
>> math_routine.F90, I got the error:
>> 
>> math_routine.F90:1333:21:
>> 
>> call subb(orig,vert1,tvec)
>>1
>> Error: Invalid procedure argument at (1)
>> math_routine.F90:1339:18:
>> 
>> qvec = cross_pdt2(tvec,edge1)
>> 1
>> Error: Invalid procedure argument at (1)
>> math_routine.F90:1345:21:
>> 
>>uu = dot_product(tvec,pvec)
>>1
>> Error: ‘vector_a’ argument of ‘dot_product’ intrinsic at (1) must be
>> numeric or LOGICAL
>> math_routine.F90:1371:21:
>> 
>>uu = dot_product(tvec,pvec)
>> 
>> These errors were not present before. My variables are mostly vectors:
>> 
>> real(8), intent(in) ::
>> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3)
>> 
>> real(8) :: uu,vv,dir(3)
>> 
>> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t
>> 
>> I wonder what happened?
>> 
>> Please advice.
>> 
>> 
>> --
>> Thank you very much.
>> 
>> Yours sincerely,
>> 
>> 
>> TAY Wee-Beng 郑伟明
>> Research Scientist
>> Experimental AeroScience Group
>> Temasek Laboratories
>> National University of Singapore
>> T-Lab Building
>> 5A, Engineering Drive 1, #02-02
>> Singapore 117411
>> Phone: +65 65167330
>> E-mail: tslta...@nus.edu.sg
>> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php
>> Personal research webpage: http://tayweebeng.wixsite.com/website
>> Youtube research showcase: 
>> https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
>> linkedin: www.linkedin.com/in/tay-weebeng
>> 
>> 
>> 
>> 
>> 
>> Important: This email is confidential and may be privileged. If you are not 
>> the intended recipient, please delete it and notify us immediately; you 
>> should not copy or use it for any purpose, nor disclose its contents to any 
>> other person. Thank you.
> 



Re: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3

2018-02-20 Thread Smith, Barry F.

   Please run a clean compile of everything and cut and paste all the output. 
This will make it much easier to debug than trying to understand your snippets 
of what is going wrong. 

> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng  wrote:
> 
> Hi,
> 
> I was previously using PETSc 3.7.6 on different clusters with both Intel
> Fortran and GNU Fortran. After upgrading, I met some problems when
> trying to compile:
> 
> On Intel Fortran:
> 
> Previously, I was using:
> 
> #include "petsc/finclude/petsc.h90"
> 
> in *.F90 files that require the use of PETSc
> 
> I read in the change log that h90 is no longer there and so I replaced
> with #include "petsc/finclude/petsc.h"
> 
> It worked. But I also have some *.F90 which do not use PETSc. However,
> they use some modules which uses PETSc.
> 
> Now I can't compile them. The error is :
> 
> math_routine.f90(3): error #7002: Error in opening the compiled module
> file.  Check INCLUDE paths.   [PETSC]
> use mpi_subroutines
> 
> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem.
> 
> The solution is that I have to compile e.g.  math_routine.F90 as if they
> use PETSc, by including PETSc include and lib files.
> 
> May I know why this is so? It was not necessary before.
> 
> Anyway, it managed to compile until it reached hypre.F90.
> 
> Previously, due to some bugs, I have to compile hypre with the -r8
> option. Also, I have to use:
> 
> integer(8) mpi_comm
> 
> mpi_comm = MPI_COMM_WORLD
> 
> to make my codes work with HYPRE.
> 
> But now, compiling gives the error:
> 
> hypre.F90(11): error #6401: The attributes of this name conflict with
> those made accessible by a USE statement.   [MPI_COMM]
> integer(8) mpi_comm
> --^
> hypre.F90(84): error #6478: A type-name must not be used as a
> variable.   [MPI_COMM]
>mpi_comm = MPI_COMM_WORLD
> ^
> hypre.F90(84): error #6303: The assignment operation or the binary
> expression operation is invalid for the data types of the two
> operands.   [1140850688]
>mpi_comm = MPI_COMM_WORLD
> ---^
> hypre.F90(100): error #6478: A type-name must not be used as a
> variable.   [MPI_COMM]
>call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr)
> ...
> 
> What's actually happening? Why can't I compile now?
> 
> On GNU gfortran:
> 
> I tried to use similar tactics as above here. However, when compiling
> math_routine.F90, I got the error:
> 
> math_routine.F90:1333:21:
> 
> call subb(orig,vert1,tvec)
> 1
> Error: Invalid procedure argument at (1)
> math_routine.F90:1339:18:
> 
> qvec = cross_pdt2(tvec,edge1)
>  1
> Error: Invalid procedure argument at (1)
> math_routine.F90:1345:21:
> 
> uu = dot_product(tvec,pvec)
> 1
> Error: ‘vector_a’ argument of ‘dot_product’ intrinsic at (1) must be
> numeric or LOGICAL
> math_routine.F90:1371:21:
> 
> uu = dot_product(tvec,pvec)
> 
> These errors were not present before. My variables are mostly vectors:
> 
> real(8), intent(in) ::
> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3)
> 
> real(8) :: uu,vv,dir(3)
> 
> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t
> 
> I wonder what happened?
> 
> Please advice.
> 
> 
> --
> Thank you very much.
> 
> Yours sincerely,
> 
> 
> TAY Wee-Beng 郑伟明
> Research Scientist
> Experimental AeroScience Group
> Temasek Laboratories
> National University of Singapore
> T-Lab Building
> 5A, Engineering Drive 1, #02-02
> Singapore 117411
> Phone: +65 65167330
> E-mail: tslta...@nus.edu.sg
> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php
> Personal research webpage: http://tayweebeng.wixsite.com/website
> Youtube research showcase: 
> https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
> linkedin: www.linkedin.com/in/tay-weebeng
> 
> 
> 
> 
> 
> Important: This email is confidential and may be privileged. If you are not 
> the intended recipient, please delete it and notify us immediately; you 
> should not copy or use it for any purpose, nor disclose its contents to any 
> other person. Thank you.



[petsc-users] Compiling problem after upgrading to PETSc 3.8.3

2018-02-20 Thread TAY wee-beng
Sorry I sent an email from the wrong email address. Anyway, here's the 
email:


Hi,

I was previously using PETSc 3.7.6 on different clusters with both Intel 
Fortran and GNU Fortran. After upgrading, I met some problems when 
trying to compile:


On Intel Fortran:

Previously, I was using:

#include "petsc/finclude/petsc.h90"

in *.F90 files that require the use of PETSc

I read in the change log that h90 is no longer there and so I replaced 
with #include "petsc/finclude/petsc.h"


It worked. But I also have some *.F90 which do not use PETSc. However, 
they use some modules which use PETSc.


Now I can't compile them. The error is :

math_routine.f90(3): error #7002: Error in opening the compiled module 
file.  Check INCLUDE paths.   [PETSC]

use mpi_subroutines

mpi_subroutines is a module which uses PETSc, and it compiled w/o problem.

The solution is that I have to compile e.g.  math_routine.F90 as if they 
use PETSc, by including PETSc include and lib files.


May I know why this is so? It was not necessary before.

Anyway, it managed to compile until it reached hypre.F90.

Previously, due to some bugs, I have to compile hypre with the -r8 
option. Also, I have to use:


integer(8) mpi_comm

mpi_comm = MPI_COMM_WORLD

to make my codes work with HYPRE.

But now, compiling gives the error:

hypre.F90(11): error #6401: The attributes of this name conflict with 
those made accessible by a USE statement.   [MPI_COMM]

integer(8) mpi_comm
--^
hypre.F90(84): error #6478: A type-name must not be used as a 
variable.   [MPI_COMM]

    mpi_comm = MPI_COMM_WORLD
^
hypre.F90(84): error #6303: The assignment operation or the binary 
expression operation is invalid for the data types of the two 
operands.   [1140850688]

    mpi_comm = MPI_COMM_WORLD
---^
hypre.F90(100): error #6478: A type-name must not be used as a 
variable.   [MPI_COMM]

    call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr)
...

What's actually happening? Why can't I compile now?

On GNU gfortran:

I tried to use similar tactics as above here. However, when compiling 
math_routine.F90, I got the error:


math_routine.F90:1333:21:

 call subb(orig,vert1,tvec)
 1
Error: Invalid procedure argument at (1)
math_routine.F90:1339:18:

 qvec = cross_pdt2(tvec,edge1)
  1
Error: Invalid procedure argument at (1)
math_routine.F90:1345:21:

 uu = dot_product(tvec,pvec)
 1
Error: ‘vector_a’ argument of ‘dot_product’ intrinsic at (1) must be 
numeric or LOGICAL

math_routine.F90:1371:21:

 uu = dot_product(tvec,pvec)

These errors were not present before. My variables are mostly vectors:

real(8), intent(in) :: 
orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3)


real(8) :: uu,vv,dir(3)

real(8) :: 
edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t


I wonder what happened?

Please advice.


--
Thank you very much.

Yours sincerely,


TAY Wee-Beng (Zheng Weiming) 郑伟明
Personal research webpage: http://tayweebeng.wixsite.com/website
Youtube research showcase: 
https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
linkedin: www.linkedin.com/in/tay-weebeng