Re: [petsc-users] Question on ISLocalToGlobalMappingGetIndices Fortran Interface

2023-05-08 Thread Danyang Su
Thanks, Mark. Yes, it actually works when I update to 
ISLocalToGlobalMappingGetIndicesF90. I made a mistake reporting this does not 
work.

 

Danyang

 

From: Mark Adams 
Date: Monday, May 8, 2023 at 7:22 PM
To: Danyang Su 
Cc: petsc-users 
Subject: Re: [petsc-users] Question on ISLocalToGlobalMappingGetIndices Fortran 
Interface

 

 

 

On Mon, May 8, 2023 at 6:50 PM Danyang Su  wrote:

Dear PETSc-Users,

 

Are there any changes to the ISLocalToGlobalMappingGetIndices function after 
PETSc 3.17?

 

In previous PETSc versions (<= 3.17), the function 
‘ISLocalToGlobalMappingGetIndices(ltogm,ltog,idltog,ierr)’ works fine, even 
though the value of idltog looks out of bounds (-11472655627481); see 
https://www.mcs.anl.gov/petsc/petsc-3.14/src/ksp/ksp/tutorials/ex14f.F90.html. 
The meaning of idltog is not clear.

 

In the latest PETSc version, this function can still be called, but the code 
fails because of the extreme value of idltog. I also tried 
‘ISLocalToGlobalMappingGetIndicesF90(ltogm,ltog,ierr)’, but with no success.

 

* You do want the latter:

 

doc/changes/319.rst:- Deprecate ``ISLocalToGlobalMappingGetIndices()`` in favor 
of ``ISLocalToGlobalMappingGetIndicesF90()``

 

* You might look at a test:

 

src/ksp/ksp/tutorials/ex14f.F90:  
PetscCall(ISLocalToGlobalMappingGetIndicesF90(ltogm,ltog,ierr))

 

* If you use 64-bit integers, be careful.

 

* You want to use a memory checker like Valgrind or a sanitizer.
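* For reference, a minimal sketch of the F90 usage, following the ex14f.F90 
pattern (the pointer declaration and the Restore call are the parts most easily 
missed; variable names are taken from the snippet below, and this is an untested 
sketch rather than a drop-in fix):

    PetscInt, pointer :: ltog(:)
    ISLocalToGlobalMapping :: ltogm

    call DMGetLocalToGlobalMapping(dmda_flow%da,ltogm,ierr)
    CHKERRQ(ierr)
    call ISLocalToGlobalMappingGetIndicesF90(ltogm,ltog,ierr)
    CHKERRQ(ierr)

    ! ltog is a Fortran pointer array holding the 0-based global indices of
    ! the local points, so no idltog offset is needed here
    do ivol = 1, nngl
       node_idx_lg2pg(ivol) = (ltog(ivol*dof)+1)/dof
    end do

    call ISLocalToGlobalMappingRestoreIndicesF90(ltogm,ltog,ierr)
    CHKERRQ(ierr)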

 

Mark

 

 

#if (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR <= 4)
    ! Old interface (PETSc <= 3.4): global indices come directly from the DMDA
    call DMDAGetGlobalIndicesF90(dmda_flow%da,PETSC_NULL_INTEGER,  &
                                 idx,ierr)
    CHKERRQ(ierr)
#else
    ! Newer interface: get the local-to-global mapping, then its indices;
    ! idltog is the index offset returned by the non-F90 interface
    call DMGetLocalToGlobalMapping(dmda_flow%da,ltogm,ierr)
    CHKERRQ(ierr)
    call ISLocalToGlobalMappingGetIndices(ltogm,ltog,idltog,ierr)
    CHKERRQ(ierr)
#endif

    dof = dmda_flow%dof

    ! map each local node to its global node number
    do ivol = 1, nngl
#if (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR <= 4)
       node_idx_lg2pg(ivol) = (idx(ivol*dof)+1)/dof
#else
       node_idx_lg2pg(ivol) = (ltog(ivol*dof + idltog)+1)/dof
#endif
    end do

 

Any suggestions on that?

 

Thanks,

 

Danyang



[petsc-users] Question on ISLocalToGlobalMappingGetIndices Fortran Interface

2023-05-08 Thread Danyang Su
Dear PETSc-Users,

 

Are there any changes to the ISLocalToGlobalMappingGetIndices function after 
PETSc 3.17?

 

In previous PETSc versions (<= 3.17), the function 
‘ISLocalToGlobalMappingGetIndices(ltogm,ltog,idltog,ierr)’ works fine, even 
though the value of idltog looks out of bounds (-11472655627481); see 
https://www.mcs.anl.gov/petsc/petsc-3.14/src/ksp/ksp/tutorials/ex14f.F90.html. 
The meaning of idltog is not clear.

 

In the latest PETSc version, this function can still be called, but the code 
fails because of the extreme value of idltog. I also tried 
‘ISLocalToGlobalMappingGetIndicesF90(ltogm,ltog,ierr)’, but with no success.

 

#if (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR <= 4)

    call DMDAGetGlobalIndicesF90(dmda_flow%da,PETSC_NULL_INTEGER,  &

 idx,ierr)

    CHKERRQ(ierr)

#else

    call DMGetLocalToGlobalMapping(dmda_flow%da,ltogm,ierr)

    CHKERRQ(ierr)

    call ISLocalToGlobalMappingGetIndices(ltogm,ltog,idltog,ierr)

    CHKERRQ(ierr)

#endif

    

dof = dmda_flow%dof

  

do ivol = 1, nngl

 

#if (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR <= 4)

    node_idx_lg2pg(ivol) = (idx(ivol*dof)+1)/dof

#else

    node_idx_lg2pg(ivol) = (ltog(ivol*dof + idltog)+1)/dof

#endif

    end do

 

Any suggestions on that?

 

Thanks,

 

Danyang



Re: [petsc-users] Fortran preprocessor not work in pets-dev

2023-05-08 Thread Danyang Su
Hi Satish,

Exactly. Something went wrong when I pulled the remote content last week. I made 
a clean download of the dev version and the problem is solved.

Thanks,

Danyang

On 2023-05-07, 7:14 AM, "Satish Balay" mailto:ba...@mcs.anl.gov>> wrote:


Perhaps you are not using the latest 'main' (or release) branch?


I get (with current main):


$ mpiexec -n 4 ./petsc_fppflags
compiled by STANDARD_FORTRAN compiler
called by rank 0
called by rank 1
called by rank 2
called by rank 3


There was an issue with the early petsc-3.19 release - here one had to reorder 
the lines from:


FPPFLAGS =
include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules


to


include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules
FPPFLAGS =


But this is fixed in the latest release and main branches.


Satish


On Sun, 7 May 2023, Danyang Su wrote:


> Hi Satish,
> 
> Sorry, this is a typo made when copying to the email. I use FPPFLAGS in the 
> makefile. Not sure why this occurs.
> 
> Actually, not only does the preprocessor fail, PETSc initialization does not 
> work either. Attached is a very simple Fortran code and below are the test 
> results. It looks like PETSc is not properly installed. I am working on macOS 
> Monterey version 12.5 (Intel Xeon W processor).
> 
> Compiled using petsc-3.18
> (base) ➜ petsc-dev-fppflags mpiexec -n 4 ./petsc_fppflags
> compiled by STANDARD_FORTRAN compiler
> called by rank 0
> called by rank 1
> called by rank 2
> called by rank 3
> 
> compiled using petsc-dev
> (base) ➜ petsc-dev-fppflags mpiexec -n 4 ./petsc_fppflags
> called by rank 2
> called by rank 2
> called by rank 2
> called by rank 2
> 
> Thanks,
> 
> Danyang
> 
> On 2023-05-06, 10:22 PM, "Satish Balay"  <mailto:ba...@mcs.anl.gov> <mailto:ba...@mcs.anl.gov 
> <mailto:ba...@mcs.anl.gov>>> wrote:
> 
> 
> On Sat, 6 May 2023, Danyang Su wrote:
> 
> 
> > Hi All,
> > 
> > 
> > 
> > My code has some FPP directives. It works fine in PETSc 3.18 and earlier 
> > versions, but stops working in the latest PETSc-Dev. For example, the 
> > following FPP macro STANDARD_FORTRAN is not recognized.
> > 
> > 
> > 
> > #ifdef STANDARD_FORTRAN
> > 
> > 1 format(15x,1000a15)
> > 
> > 2 format(1pe15.6e3,1000(1pe15.6e3))
> > 
> > #else
> > 
> > 1 format(15x,a15) 
> > 
> > 2 format(1pe15.6e3,(1pe15.6e3))
> > 
> > #endif
> > 
> > 
> > 
> > In the makefile, I define the preprocessor as PPFLAGS.
> > 
> > 
> > 
> > PPFLAGS := -DLINUX -DRELEASE -DRELEASE_X64 -DSTANDARD_FORTRAN
> 
> 
> Shouldn't this be FPPFLAGS?
> 
> 
> 
> 
> Can you send us a simple test case [with the makefile] that we can try to 
> demonstrate this problem?
> 
> 
> Satish
> 
> 
> > 
> > …
> > 
> > exe: $(OBJS) chkopts
> > 
> > -${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS) -o $(EXENAME) $(OBJS) 
> > ${PETSC_LIB} ${LIS_LIB} ${DLIB} ${SLIB}
> > 
> > 
> > 
> > Any idea on this problem?
> > 
> > 
> > 
> > All the best,
> > 
> > 
> > 
> > 
> > 
> > 
> 
> 
> 
> 






Re: [petsc-users] Fortran preprocessor not work in pets-dev

2023-05-07 Thread Danyang Su
Hi Satish,

Sorry, this is a typo made when copying to the email. I use FPPFLAGS in the 
makefile. Not sure why this occurs.

Actually, not only does the preprocessor fail, PETSc initialization does not 
work either. Attached is a very simple Fortran code and below are the test 
results. It looks like PETSc is not properly installed. I am working on macOS 
Monterey version 12.5 (Intel Xeon W processor).

Compiled using petsc-3.18
(base) ➜  petsc-dev-fppflags mpiexec -n 4 ./petsc_fppflags
 compiled by STANDARD_FORTRAN compiler
 called by rank0
 called by rank1
 called by rank2
 called by rank3

compiled using petsc-dev
(base) ➜  petsc-dev-fppflags mpiexec -n 4 ./petsc_fppflags
 called by rank2
 called by rank2
 called by rank2
 called by rank2
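Since the attached files appear in the archive only as binary data, a minimal 
sketch of a test of this kind (the exact contents of driver_pc.F90 are an 
assumption, reconstructed from the output above):

      program petsc_fppflags
#include <petsc/finclude/petscsys.h>
      use petscsys
      implicit none
      PetscErrorCode :: ierr
      PetscMPIInt    :: rank

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      CHKERRQ(ierr)
      call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)
#ifdef STANDARD_FORTRAN
      ! printed only when -DSTANDARD_FORTRAN reaches the compiler via FPPFLAGS
      if (rank == 0) write(*,*) 'compiled by STANDARD_FORTRAN compiler'
#endif
      write(*,*) 'called by rank', rank
      call PetscFinalize(ierr)
      end program petsc_fppflags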

Thanks,

Danyang

On 2023-05-06, 10:22 PM, "Satish Balay" mailto:ba...@mcs.anl.gov>> wrote:


On Sat, 6 May 2023, Danyang Su wrote:


> Hi All,
> 
> 
> 
> My code has some FPP directives. It works fine in PETSc 3.18 and earlier 
> versions, but stops working in the latest PETSc-Dev. For example, the 
> following FPP macro STANDARD_FORTRAN is not recognized.
> 
> 
> 
> #ifdef STANDARD_FORTRAN
> 
> 1 format(15x,1000a15)
> 
> 2 format(1pe15.6e3,1000(1pe15.6e3))
> 
> #else
> 
> 1 format(15x,a15) 
> 
> 2 format(1pe15.6e3,(1pe15.6e3))
> 
> #endif
> 
> 
> 
> In the makefile, I define the preprocessor as PPFLAGS.
> 
> 
> 
> PPFLAGS := -DLINUX -DRELEASE -DRELEASE_X64 -DSTANDARD_FORTRAN


Shouldn't this be FPPFLAGS?




Can you send us a simple test case [with the makefile] that we can try to 
demonstrate this problem?


Satish


> 
> …
> 
> exe: $(OBJS) chkopts
> 
> -${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS) -o $(EXENAME) $(OBJS) 
> ${PETSC_LIB} ${LIS_LIB} ${DLIB} ${SLIB}
> 
> 
> 
> Any idea on this problem?
> 
> 
> 
> All the best,
> 
> 
> 
> 
> 
> 





driver_pc.F90
Description: Binary data


makefile
Description: Binary data


petsc_mpi_common.F90
Description: Binary data


Re: [petsc-users] PETSC ERROR in DMGetLocalBoundingBox?

2023-04-30 Thread Danyang Su
Hi Matt,

 

Just letting you know that the error in DMGetLocalBoundingBox seems to be fixed 
in the latest dev version. I don’t see the error message any more.

 

Regards,

 

Danyang

 

From: 
Date: Friday, March 17, 2023 at 11:02 AM
To: 'Matthew Knepley' 
Cc: 
Subject: RE: [petsc-users] PETSC ERROR in DMGetLocalBoundingBox?

 

Hi Matt,

 

I am following up to check if you can reproduce the problem on your side. 

 

Thanks and have a great weekend,

 

Danyang

 

From: Danyang Su  
Sent: March 4, 2023 4:38 PM
To: Matthew Knepley 
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PETSC ERROR in DMGetLocalBoundingBox?

 

Hi Matt,

 

Attached are the source code and an example. I have deleted most of the unused 
source code, but it is still a bit lengthy. Sorry about that. The errors come 
after DMGetLocalBoundingBox and DMGetBoundingBox.

 

-> To compile the code

Please type 'make exe' and the executable file petsc_bounding will be created 
under the same folder.

 

 

-> To test the code

Please go to the folder 'test' and type 'mpiexec -n 1 ../petsc_bounding'.

 

 

-> The output from PETSc 3.18, error information

input file: stedvs.dat

 



global control parameters



 

[0]PETSC ERROR: - Error Message 
--

[0]PETSC ERROR: Corrupt argument: https://petsc.org/release/faq/#valgrind

[0]PETSC ERROR: Object already free: Parameter # 1

[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.

[0]PETSC ERROR: Petsc Release Version 3.18.3, Dec 28, 2022 

[0]PETSC ERROR: ../petsc_bounding on a linux-gnu-dbg named starblazer by dsu 
Sat Mar  4 16:20:51 2023

[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
--with-fc=gfortran --download-mpich --download-scalapack --download-parmetis 
--download-metis --download-mumps --download-ptscotch --download-chaco 
--download-fblaslapack --download-hypre --download-superlu_dist 
--download-hdf5=yes --download-ctetgen --download-zlib --download-pnetcdf 
--download-cmake --with-hdf5-fortran-bindings --with-debugging=1

[0]PETSC ERROR: #1 VecGetArrayRead() at 
/home/dsu/Soft/petsc/petsc-3.18.3/src/vec/vec/interface/rvector.c:1928

[0]PETSC ERROR: #2 DMGetLocalBoundingBox() at 
/home/dsu/Soft/petsc/petsc-3.18.3/src/dm/interface/dmcoordinates.c:897

[0]PETSC ERROR: #3 
/home/dsu/Work/bug-check/petsc_bounding/src/solver_ddmethod.F90:1920

Total volume of simulation domain   0.2000E+01

Total volume of simulation domain   0.2000E+01

 

 

-> The output from PETSc 3.17 and earlier, no error

input file: stedvs.dat

 



global control parameters



 

Total volume of simulation domain   0.2000E+01

Total volume of simulation domain   0.2000E+01

 

 

Thanks,

 

Danyang

From: Matthew Knepley 
Date: Friday, March 3, 2023 at 8:58 PM
To: 
Cc: 
Subject: Re: [petsc-users] PETSC ERROR in DMGetLocalBoundingBox?

 

On Sat, Mar 4, 2023 at 1:35 AM  wrote:

Hi All,

 

I get a very strange error after upgrading PETSc to version 3.18.3, indicating 
some object is already free. The error is benign and does not crash the code. 
There is no error with PETSc 3.17.5 and earlier versions.

 

We have changed the way coordinates are handled in order to support higher 
order coordinate fields. Is it possible

to send something that we can run that has this error? It could be on our end, 
but it could also be that you are

destroying a coordinate vector accidentally.

 

  Thanks,

 

 Matt

 

 

!Check coordinates

call DMGetCoordinateDM(dmda_flow%da,cda,ierr)

CHKERRQ(ierr)

call DMGetCoordinates(dmda_flow%da,gc,ierr)

CHKERRQ(ierr)

call DMGetLocalBoundingBox(dmda_flow%da,lmin,lmax,ierr)

CHKERRQ(ierr)

call DMGetBoundingBox(dmda_flow%da,gmin,gmax,ierr)

CHKERRQ(ierr)
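A note on one possible cause, following Matt's comment above (this is an 
assumption, not a confirmed diagnosis): the Vec returned by DMGetCoordinates() 
is borrowed from the DM, so the caller must not destroy it. A minimal sketch:

    call DMGetCoordinates(dmda_flow%da,gc,ierr)
    CHKERRQ(ierr)
    ! gc refers to the DM's own coordinate vector; read from it as needed,
    ! but do NOT call VecDestroy(gc,ierr) afterwards. Destroying it can leave
    ! the DM holding a freed vector, which could produce the
    ! "Object already free" error from VecGetArrayRead()/DMGetLocalBoundingBox()
    ! shown below.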

 

 

[0]PETSC ERROR: - Error Message 
--

[0]PETSC ERROR: Corrupt argument: https://petsc.org/release/faq/#valgrind

[0]PETSC ERROR: Object already free: Parameter # 1

[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.

[0]PETSC ERROR: Petsc Release Version 3.18.3, Dec 28, 2022

[0]PETSC ERROR: ../min3p-hpc-mpi-petsc-3.18.3 on a linux-gnu-dbg named 
starblazer by dsu Fri Mar  3 16:26:03 2023

[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
--with-fc=gfortran --download-mpich --download-scalapack --download-parmetis 
--download-metis --download-mumps --download-ptscotch --download-chaco 
--download-fblaslapack --download-hypre --download-superlu_dist 
--download-hdf5=yes --download-ctetgen --download-zlib --down

Re: [petsc-users] PETSC ERROR in DMGetLocalBoundingBox?

2023-03-04 Thread Danyang Su
Hi Matt,

 

Attached are the source code and an example. I have deleted most of the unused 
source code, but it is still a bit lengthy. Sorry about that. The errors come 
after DMGetLocalBoundingBox and DMGetBoundingBox.

 

-> To compile the code

Please type 'make exe' and the executable file petsc_bounding will be created 
under the same folder.

 

 

-> To test the code

Please go to the folder 'test' and type 'mpiexec -n 1 ../petsc_bounding'.

 

 

-> The output from PETSc 3.18, error information

input file: stedvs.dat

 



global control parameters



 

[0]PETSC ERROR: - Error Message 
--

[0]PETSC ERROR: Corrupt argument: https://petsc.org/release/faq/#valgrind

[0]PETSC ERROR: Object already free: Parameter # 1

[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.

[0]PETSC ERROR: Petsc Release Version 3.18.3, Dec 28, 2022 

[0]PETSC ERROR: ../petsc_bounding on a linux-gnu-dbg named starblazer by dsu 
Sat Mar  4 16:20:51 2023

[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
--with-fc=gfortran --download-mpich --download-scalapack --download-parmetis 
--download-metis --download-mumps --download-ptscotch --download-chaco 
--download-fblaslapack --download-hypre --download-superlu_dist 
--download-hdf5=yes --download-ctetgen --download-zlib --download-pnetcdf 
--download-cmake --with-hdf5-fortran-bindings --with-debugging=1

[0]PETSC ERROR: #1 VecGetArrayRead() at 
/home/dsu/Soft/petsc/petsc-3.18.3/src/vec/vec/interface/rvector.c:1928

[0]PETSC ERROR: #2 DMGetLocalBoundingBox() at 
/home/dsu/Soft/petsc/petsc-3.18.3/src/dm/interface/dmcoordinates.c:897

[0]PETSC ERROR: #3 
/home/dsu/Work/bug-check/petsc_bounding/src/solver_ddmethod.F90:1920

Total volume of simulation domain   0.2000E+01

Total volume of simulation domain   0.2000E+01

 

 

-> The output from PETSc 3.17 and earlier, no error

input file: stedvs.dat

 



global control parameters



 

Total volume of simulation domain   0.2000E+01

Total volume of simulation domain   0.2000E+01

 

 

Thanks,

 

Danyang

From: Matthew Knepley 
Date: Friday, March 3, 2023 at 8:58 PM
To: 
Cc: 
Subject: Re: [petsc-users] PETSC ERROR in DMGetLocalBoundingBox?

 

On Sat, Mar 4, 2023 at 1:35 AM  wrote:

Hi All,

 

I get a very strange error after upgrading PETSc to version 3.18.3, indicating 
some object is already free. The error is benign and does not crash the code. 
There is no error with PETSc 3.17.5 and earlier versions.

 

We have changed the way coordinates are handled in order to support higher 
order coordinate fields. Is it possible

to send something that we can run that has this error? It could be on our end, 
but it could also be that you are

destroying a coordinate vector accidentally.

 

  Thanks,

 

 Matt

 

 

!Check coordinates

call DMGetCoordinateDM(dmda_flow%da,cda,ierr)

CHKERRQ(ierr)

call DMGetCoordinates(dmda_flow%da,gc,ierr)

CHKERRQ(ierr)

call DMGetLocalBoundingBox(dmda_flow%da,lmin,lmax,ierr)

CHKERRQ(ierr)

call DMGetBoundingBox(dmda_flow%da,gmin,gmax,ierr)

CHKERRQ(ierr)

 

 

[0]PETSC ERROR: - Error Message 
--

[0]PETSC ERROR: Corrupt argument: https://petsc.org/release/faq/#valgrind

[0]PETSC ERROR: Object already free: Parameter # 1

[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.

[0]PETSC ERROR: Petsc Release Version 3.18.3, Dec 28, 2022

[0]PETSC ERROR: ../min3p-hpc-mpi-petsc-3.18.3 on a linux-gnu-dbg named 
starblazer by dsu Fri Mar  3 16:26:03 2023

[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ 
--with-fc=gfortran --download-mpich --download-scalapack --download-parmetis 
--download-metis --download-mumps --download-ptscotch --download-chaco 
--download-fblaslapack --download-hypre --download-superlu_dist 
--download-hdf5=yes --download-ctetgen --download-zlib --download-pnetcdf 
--download-cmake --with-hdf5-fortran-bindings --with-debugging=1

[0]PETSC ERROR: #1 VecGetArrayRead() at 
/home/dsu/Soft/petsc/petsc-3.18.3/src/vec/vec/interface/rvector.c:1928

[0]PETSC ERROR: #2 DMGetLocalBoundingBox() at 
/home/dsu/Soft/petsc/petsc-3.18.3/src/dm/interface/dmcoordinates.c:897

[0]PETSC ERROR: #3 
/home/dsu/Work/min3p-dbs-backup/src/project/makefile_p/../../solver/solver_ddmethod.F90:2140

 

Any suggestion on this?

 

Thanks,

 

Danyang


 

-- 

What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert 

Re: [petsc-users] Cmake problem on an old cluster

2023-01-19 Thread Danyang Su
Hi Satish,

For some unknown reason, during CMake 3.18.5 installation I get the error "Cannot 
find a C++ compiler that supports both C++11 and the specified C++ flags." The 
system-installed CMake 3.2.3 is way too old.

I will just leave it as is since superlu_dist is optional in my model. 

Thanks for your suggestions to make it work,

Danyang

On 2023-01-19, 4:52 PM, "Satish Balay" mailto:ba...@mcs.anl.gov>> wrote:


Looks like .bashrc is getting sourced again during the build process [as make 
creates a new bash shell during the build] - thus overriding the env variable 
that's set.


Glad you have a working build now. Thanks for the update!


BTW: superlu-dist requires cmake 3.18.1 or higher. You could check if this 
older version of cmake builds on this cluster [if you want to give superlu-dist 
a try again]


Satish




On Thu, 19 Jan 2023, Danyang Su wrote:


> Hi Satish,
> 
> That's a bit strange since I have already used export
> PETSC_DIR=/home/danyangs/soft/petsc/petsc-3.18.3.
> 
> Yes, I have petsc 3.13.6 installed and have PETSC_DIR set in the bashrc file.
> After changing PETSC_DIR in the bashrc file, PETSc can be compiled now.
> 
> Thanks,
> 
> Danyang
> 
> On 2023-01-19 3:58 p.m., Satish Balay wrote:
> >> /home/danyangs/soft/petsc/petsc-3.13.6/src/sys/makefile contains a
> >> directory not on the filesystem: ['\\']
> >
> > Its strange that its complaining about petsc-3.13.6. Do you have this
> > location set in your .bashrc or similar file - that's getting sourced during
> > the build?
> >
> > Perhaps you could start with a fresh copy of petsc and retry?
> >
> > Also suggest using 'arch-' prefix for PETSC_ARCH i.e
> > 'arch-intel-14.0.2-openmpi-1.6.5' - just in case there are some bugs lurking
> > with skipping build files in this location
> >
> > Satish
> >
> >
> > On Thu, 19 Jan 2023, Danyang Su wrote:
> >
> >> Hi Barry and Satish,
> >>
> >> I guess there is compatibility problem with some external package. The
> >> latest
> >> CMake complains about the compiler, so I remove superlu_dist option since I
> >> rarely use it. Then the HYPRE package shows "Error: Hypre requires C++
> >> compiler. None specified", which is a bit tricky since c++ compiler is
> >> specified in the configuration so I comment the related error code in
> >> hypre.py
> >> during configuration. After doing this, there is no error during PETSc
> >> configuration but new error occurs during make process.
> >>
> >> **ERROR*
> >> Error during compile, check
> >> intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/make.log
> >> Send it and intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/configure.log to
> >> petsc-ma...@mcs.anl.gov <mailto:petsc-ma...@mcs.anl.gov>
> >> 
> >>
> >> It might be not worth checking this problem since most of the users do not
> >> work on such old cluster. Both log files are attached in case any developer
> >> wants to check. Please let me know if there is any suggestions and I am
> >> willing to make a test.
> >>
> >> Thanks,
> >>
> >> Danyang
> >>
> >> On 2023-01-19 11:18 a.m., Satish Balay wrote:
> >>> BTW: cmake is required by superlu-dist not petsc.
> >>>
> >>> And its possible that petsc might not build with this old version of
> >>> openmpi
> >>> - [and/or the externalpackages that you are installing - might not build
> >>> with this old version of intel compilers].
> >>>
> >>> Satish
> >>>
> >>> On Thu, 19 Jan 2023, Barry Smith wrote:
> >>>
> >>>> Remove
> >>>> --download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
> >>>> and install CMake yourself. Then configure PETSc with
> >>>> --with-cmake=directory you installed it in.
> >>>>
> >>>> Barry
> >>>>
> >>>>
> >>>>> On Jan 19, 2023, at 1:46 PM, Danyang Su  >>>>> <mailto:danyang...@gmail.com>> wrote:
> >>>>>
> >>>>> Hi All,
> >>>>>
> >>>>> I am trying to install the latest PETSc on an old cluster but always get
> >>>>> some error information at the step of cmake. The system installed cmake
> >>>>> is
> >>>>> V3.2.3, which is out-of-da

Re: [petsc-users] Cmake problem on an old cluster

2023-01-19 Thread Danyang Su

Hi Satish,

That's a bit strange since I have already used export 
PETSC_DIR=/home/danyangs/soft/petsc/petsc-3.18.3.


Yes, I have petsc 3.13.6 installed and have PETSC_DIR set in the bashrc 
file. After changing PETSC_DIR in the bashrc file, PETSc can be compiled 
now.


Thanks,

Danyang

On 2023-01-19 3:58 p.m., Satish Balay wrote:

/home/danyangs/soft/petsc/petsc-3.13.6/src/sys/makefile contains a directory 
not on the filesystem: ['\\']


It's strange that it's complaining about petsc-3.13.6. Do you have this location 
set in your .bashrc or similar file - that's getting sourced during the build?

Perhaps you could start with a fresh copy of petsc and retry?

Also suggest using 'arch-' prefix for PETSC_ARCH i.e 
'arch-intel-14.0.2-openmpi-1.6.5' - just in case there are some bugs lurking 
with skipping build files in this location

Satish


On Thu, 19 Jan 2023, Danyang Su wrote:


Hi Barry and Satish,

I guess there is a compatibility problem with some external package. The latest
CMake complains about the compiler, so I removed the superlu_dist option since I
rarely use it. Then the HYPRE package shows "Error: Hypre requires C++
compiler. None specified", which is a bit tricky since the C++ compiler is
specified in the configuration, so I commented out the related error check in
hypre.py during configuration. After doing this, there is no error during PETSc
configuration, but a new error occurs during the make process.

**ERROR*
   Error during compile, check
intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/make.log
   Send it and intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/configure.log to
petsc-ma...@mcs.anl.gov


It might not be worth checking this problem since most users do not
work on such an old cluster. Both log files are attached in case any developer
wants to check. Please let me know if there are any suggestions; I am
willing to run a test.

Thanks,

Danyang

On 2023-01-19 11:18 a.m., Satish Balay wrote:

BTW: cmake is required by superlu-dist not petsc.

And its possible that petsc might not build with this old version of openmpi
- [and/or the externalpackages that you are installing - might not build
with this old version of intel compilers].

Satish

On Thu, 19 Jan 2023, Barry Smith wrote:


Remove

--download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
and install CMake yourself. Then configure PETSc with
--with-cmake=directory you installed it in.

Barry



On Jan 19, 2023, at 1:46 PM, Danyang Su  wrote:

Hi All,

I am trying to install the latest PETSc on an old cluster but always get
some error at the cmake step. The system-installed cmake is
V3.2.3, which is out-of-date for PETSc. I tried --download-cmake
first, but it does not work. Then I tried to clean everything (deleted the
petsc_arch folder), downloaded the latest cmake myself and passed the path to
the configuration; the error is still there.

The compiler there is a bit old, intel-14.0.2 and openmpi-1.6.5. I have no
problem installing PETSc-3.13.6 there. The latest version cannot pass
configuration, unfortunately. Attached is the last configuration I have
tried.

--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90
--download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
--download-mumps --download-scalapack --download-parmetis --download-metis
--download-ptscotch --download-fblaslapack --download-hypre
--download-superlu_dist --download-hdf5=yes --with-hdf5-fortran-bindings
--with-debugging=0 COPTFLAGS="-O2 -march=native -mtune=native"
CXXOPTFLAGS="-O2 -march=native -mtune=native" FOPTFLAGS="-O2 -march=native
-mtune=native"

Is there any solution for this.

Thanks,

Danyang





Re: [petsc-users] Error running configure on HDF5 in PETSc-3.18.3

2023-01-06 Thread Danyang Su
Hi All,

The problem is resolved by Homebrew gfortran. I used GNU Fortran (GCC) 8.2.0 
before. Conda did not cause the problem.

Thanks,

Danyang

On 2023-01-06, 2:24 PM, "Satish Balay" mailto:ba...@mcs.anl.gov>> wrote:


Likely your installed gfortran is incompatible with hdf5


>>>>
Executing: gfortran --version
stdout:
GNU Fortran (GCC) 8.2.0
<<<<


We generally use brew gfortran - and that works with hdf5 as well


balay@ypro ~ % gfortran --version
GNU Fortran (Homebrew GCC 11.2.0_1) 11.2.0


Satish


On Fri, 6 Jan 2023, Danyang Su wrote:


> Hi Pierre,
> 
> 
> 
> I have tried to exclude Conda related environment variables but it does not 
> work. Instead, if I include ‘--download-hdf5=yes’ but exclude 
> ‘--with-hdf5-fortran-bindings’ in the configuration, PETSc can be configured 
> and installed without problem, even with Conda related environment activated. 
> However, since my code requires fortran interface to HDF5, I do need 
> ‘--with-hdf5-fortran-bindings’, otherwise, my code cannot be compiled.
> 
> 
> 
> Any other suggestions?
> 
> 
> 
> Thanks,
> 
> 
> 
> Danyang
> 
> 
> 
> From: Pierre Jolivet mailto:pie...@joliv.et>>
> Date: Friday, January 6, 2023 at 7:59 AM
> To: Danyang Su mailto:danyang...@gmail.com>>
> Cc: mailto:petsc-users@mcs.anl.gov>>
> Subject: Re: [petsc-users] Error running configure on HDF5 in PETSc-3.18.3
> 
> 
> 
> 
> 
> 
> 
> On 6 Jan 2023, at 4:49 PM, Danyang Su  <mailto:danyang...@gmail.com>> wrote:
> 
> 
> 
> Hi All,
> 
> 
> 
> I get ‘Error running configure on HDF5’ in PETSc-3.18.3 on MacOS, but no 
> problem on Ubuntu. Attached is the configuration log file. 
> 
> 
> 
> ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mumps 
> --download-scalapack --download-parmetis --download-metis --download-ptscotch 
> --download-fblaslapack --download-mpich --download-hypre 
> --download-superlu_dist --download-hdf5=yes --with-debugging=0 
> --download-cmake --with-hdf5-fortran-bindings
> 
> 
> 
> Any idea on this?
> 
> 
> 
> Could you try to reconfigure in a shell without conda being activated?
> 
> You have 
> PATH=/Users/danyangsu/Soft/Anaconda3/bin:/Users/danyangsu/Soft/Anaconda3/condabin:[…]
>  which typically results in a broken configuration.
> 
> 
> 
> Thanks,
> 
> Pierre
> 
> 
> 
> Thanks,
> 
> 
> 
> Danyang
> 
> 
> 
> 
> 






Re: [petsc-users] Error running configure on HDF5 in PETSc-3.18.3

2023-01-06 Thread Danyang Su
Hi Pierre,

 

I have tried excluding the Conda-related environment variables, but it does not 
help. Instead, if I include ‘--download-hdf5=yes’ but exclude 
‘--with-hdf5-fortran-bindings’ in the configuration, PETSc can be configured 
and installed without problems, even with the Conda environment activated. 
However, since my code requires the Fortran interface to HDF5, I do need 
‘--with-hdf5-fortran-bindings’; otherwise my code cannot be compiled.

 

Any other suggestions?

 

Thanks,

 

Danyang

 

From: Pierre Jolivet 
Date: Friday, January 6, 2023 at 7:59 AM
To: Danyang Su 
Cc: 
Subject: Re: [petsc-users] Error running configure on HDF5 in PETSc-3.18.3

 

 



On 6 Jan 2023, at 4:49 PM, Danyang Su  wrote:

 

Hi All,

 

I get ‘Error running configure on HDF5’ in PETSc-3.18.3 on MacOS, but no 
problem on Ubuntu. Attached is the configuration log file. 

 

./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mumps 
--download-scalapack --download-parmetis --download-metis --download-ptscotch 
--download-fblaslapack --download-mpich --download-hypre 
--download-superlu_dist --download-hdf5=yes --with-debugging=0 --download-cmake 
--with-hdf5-fortran-bindings

 

Any idea on this?

 

Could you try to reconfigure in a shell without conda being activated?

You have 
PATH=/Users/danyangsu/Soft/Anaconda3/bin:/Users/danyangsu/Soft/Anaconda3/condabin:[…]
 which typically results in a broken configuration.

 

Thanks,

Pierre



Thanks,

 

Danyang






Re: [petsc-users] Fortran HDF5 Cannot be found in PETSc-3.16

2022-02-28 Thread Danyang Su

Thanks, Barry. It works now.

Danyang

On 2022-02-28 9:59 a.m., Barry Smith wrote:

   You need the additional configure option --download-hdf5-fortran-bindings  
Please make sure you have the latest 3.16.4

   Barry



On Feb 28, 2022, at 12:42 PM, Danyang Su  wrote:

Hi All,

Has anyone encountered the problem where HDF5-related Fortran code cannot be 
compiled with PETSc-3.16 because 'use hdf5' cannot find the required module file?
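For context, a minimal sketch of the kind of Fortran code that needs the HDF5 
Fortran module files (the program itself is hypothetical):

      program check_hdf5
      use hdf5                ! fails to compile when hdf5.mod is missing
      implicit none
      integer :: hdferr
      call h5open_f(hdferr)   ! initialize the HDF5 Fortran interface
      if (hdferr /= 0) stop 'h5open_f failed'
      call h5close_f(hdferr)
      end program check_hdf5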

Compared to HDF5-1.12.0 and earlier versions, some module/object files (e.g., 
hdf5.mod, hdf5.o) are missing with HDF5-1.12.1 in PETSc-3.16. I checked the 
makefile in the hdf5 folder in externalpackages; there are some differences 
which I guess might cause the problem.

In PETSc-3.15, HDF5-1.12.0

# Make sure that these variables are exported to the Makefiles
F9XMODEXT = mod
F9XMODFLAG = -I
F9XSUFFIXFLAG =

In PETSc-3.16, HDF5-1.12.1

# Make sure that these variables are exported to the Makefiles
F9XMODEXT =
F9XMODFLAG =
F9XSUFFIXFLAG

The configuration I use in PETSc-3.16 is the same as PETSc-3.15.

./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mumps --download-scalapack 
--download-parmetis --download-metis --download-ptscotch --download-fblaslapack --download-mpich 
--download-hypre --download-superlu_dist --download-hdf5=yes --download-cmake --with-debugging=0 
COPTFLAGS="-O2 -march=native -mtune=native" CXXOPTFLAGS="-O2 -march=native -mtune=native" 
FOPTFLAGS="-O2 -march=native -mtune=native"

Is this a bug or something wrong in my PETSc configuration?

Thanks,

Danyang



[petsc-users] Fortran HDF5 Cannot be found in PETSc-3.16

2022-02-28 Thread Danyang Su

Hi All,

Has anyone encountered the problem where HDF5-related Fortran code cannot 
be compiled with PETSc-3.16 because 'use hdf5' cannot find the required 
module file?


Compared to HDF5-1.12.0 and earlier versions, some module/object files 
(e.g., hdf5.mod, hdf5.o) are missing with HDF5-1.12.1 in PETSc-3.16. I 
checked the makefile in the hdf5 folder in externalpackages; there are 
some differences which I guess might cause the problem.


In PETSc-3.15, HDF5-1.12.0

# Make sure that these variables are exported to the Makefiles
F9XMODEXT = mod
F9XMODFLAG = -I
F9XSUFFIXFLAG =

In PETSc-3.16, HDF5-1.12.1

# Make sure that these variables are exported to the Makefiles
F9XMODEXT =
F9XMODFLAG =
F9XSUFFIXFLAG

The configuration I use in PETSc-3.16 is the same as PETSc-3.15.

./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran 
--download-mumps --download-scalapack --download-parmetis 
--download-metis --download-ptscotch --download-fblaslapack 
--download-mpich --download-hypre --download-superlu_dist 
--download-hdf5=yes --download-cmake --with-debugging=0 COPTFLAGS="-O2 
-march=native -mtune=native" CXXOPTFLAGS="-O2 -march=native 
-mtune=native" FOPTFLAGS="-O2 -march=native -mtune=native"


Is this a bug or something wrong in my PETSc configuration?

Thanks,

Danyang



Re: [petsc-users] PETSc configuration error on macOS Monterey with Intel oneAPI

2022-01-13 Thread Danyang Su
Hi Samar,

 

Yes, with mpich, there is no such error. I will just use this configuration for 
now. 

 

Thanks,

 

Danyang

 

From: Samar Khatiwala 
Date: Thursday, January 13, 2022 at 1:16 AM
To: Danyang Su 
Cc: PETSc 
Subject: Re: [petsc-users] PETSc configuration error on macOS Monterey with 
Intel oneAPI

 

Hi Danyang,

 

Just to reiterate, the presence of -Wl,-flat_namespace *is* the problem. I got 
rid of it by configuring mpich with --enable-two-level-namespace. I reported 
this problem to the PETSc 

folks a few weeks ago and they were going to patch MPICH.py (under 
config/BuildSystem/config/packages) to pass this flag. So you could try 
configuring with —download-mpich 

(or build your own mpich, which is pretty straightforward). If you’re wedded to 
openmpi, you could patch up OpenMPI.py yourself (maybe 
--enable-two-level-namespace is called 

something else for openmpi).

 

Best,

 

Samar



On Jan 13, 2022, at 6:01 AM, Danyang Su  wrote:

 

Hi Samar,

 

Thanks for your suggestion. Unfortunately, it does not work. I checked the 
mpif90 wrapper and the option "-Wl,-flat_namespace” is present. 

 

(base) ➜  bin ./mpif90 -show

ifort -I/Users/danyangsu/Soft/PETSc/petsc-3.16.3/macos-intel-dbg/include 
-Wl,-flat_namespace -Wl,-commons,use_dylibs 
-I/Users/danyangsu/Soft/PETSc/petsc-3.16.3/macos-intel-dbg/lib 
-L/Users/danyangsu/Soft/PETSc/petsc-3.16.3/macos-intel-dbg/lib -lmpi_usempif08 
-lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi

 

Thanks anyway,

 

Danyang

From: Samar Khatiwala 
Date: Wednesday, January 12, 2022 at 2:01 PM
To: Danyang Su 
Cc: PETSc 
Subject: Re: [petsc-users] PETSc configuration error on macOS Monterey with 
Intel oneAPI

 

Hi Danyang,

 

I had trouble configuring PETSc on MacOS Monterey with ifort when using mpich 
(which I was building myself). I tracked it down to an errant 
"-Wl,-flat_namespace” 

option in the mpif90 wrapper. I rebuilt mpich with the 
"--enable-two-level-namespace” configuration option and the problem went away. 
I don’t know if there’s a similar 

issue with openmpi but you could check the corresponding mpif90 wrapper (mpif90 
-show) whether "-Wl,-flat_namespace” is present or not. If so, perhaps passing 

"--enable-two-level-namespace” to PETSc configure might fix the problem 
(although I don’t know how you would set this flag *just* for building openmpi).

 

Samar




On Jan 12, 2022, at 9:41 PM, Danyang Su  wrote:

 

Hi All,

I got an error in PETSc configuration on macOS Monterey with Intel oneAPI using 
the following options:

 

./configure --with-cc=icc --with-cxx=icpc --with-fc=ifort 
--with-blas-lapack-dir=/opt/intel/oneapi/mkl/2022.0.0/lib/ --with-debugging=1 
PETSC_ARCH=macos-intel-dbg --download-mumps --download-parmetis 
--download-metis --download-hypre --download-superlu --download-hdf5=yes 
--download-openmpi

 

Error with downloaded OpenMPI: Cannot compile/link FC with 
/Users/danyangsu/Soft/PETSc/petsc-3.16.3/macos-intel-dbg/bin/mpif90.

 

Any suggestions for that?

 

There is no problem if I use GNU compiler and MPICH.

 

Thanks,

 

Danyang

 



Re: [petsc-users] PETSc configuration error on macOS Monterey with Intel oneAPI

2022-01-12 Thread Danyang Su
Hi Samar,

 

Thanks for your suggestion. Unfortunately, it does not work. I checked the 
mpif90 wrapper and the option "-Wl,-flat_namespace” is present. 

 

(base) ➜  bin ./mpif90 -show

ifort -I/Users/danyangsu/Soft/PETSc/petsc-3.16.3/macos-intel-dbg/include 
-Wl,-flat_namespace -Wl,-commons,use_dylibs 
-I/Users/danyangsu/Soft/PETSc/petsc-3.16.3/macos-intel-dbg/lib 
-L/Users/danyangsu/Soft/PETSc/petsc-3.16.3/macos-intel-dbg/lib -lmpi_usempif08 
-lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi

 

Thanks anyway,

 

Danyang

From: Samar Khatiwala 
Date: Wednesday, January 12, 2022 at 2:01 PM
To: Danyang Su 
Cc: PETSc 
Subject: Re: [petsc-users] PETSc configuration error on macOS Monterey with 
Intel oneAPI

 

Hi Danyang,

 

I had trouble configuring PETSc on MacOS Monterey with ifort when using mpich 
(which I was building myself). I tracked it down to an errant 
"-Wl,-flat_namespace” 

option in the mpif90 wrapper. I rebuilt mpich with the 
"--enable-two-level-namespace” configuration option and the problem went away. 
I don’t know if there’s a similar 

issue with openmpi but you could check the corresponding mpif90 wrapper (mpif90 
-show) whether "-Wl,-flat_namespace” is present or not. If so, perhaps passing 

"--enable-two-level-namespace” to PETSc configure might fix the problem 
(although I don’t know how you would set this flag *just* for building openmpi).

 

Samar



On Jan 12, 2022, at 9:41 PM, Danyang Su  wrote:

 

Hi All,

I got an error in PETSc configuration on macOS Monterey with Intel oneAPI using 
the following options:

 

./configure --with-cc=icc --with-cxx=icpc --with-fc=ifort 
--with-blas-lapack-dir=/opt/intel/oneapi/mkl/2022.0.0/lib/ --with-debugging=1 
PETSC_ARCH=macos-intel-dbg --download-mumps --download-parmetis 
--download-metis --download-hypre --download-superlu --download-hdf5=yes 
--download-openmpi

 

Error with downloaded OpenMPI: Cannot compile/link FC with 
/Users/danyangsu/Soft/PETSc/petsc-3.16.3/macos-intel-dbg/bin/mpif90.

 

Any suggestions for that?

 

There is no problem if I use GNU compiler and MPICH.

 

Thanks,

 

Danyang 

 



[petsc-users] PETSc configuration error on macOS Monterey with Intel oneAPI

2022-01-12 Thread Danyang Su

Hi All,

I got an error in PETSc configuration on macOS Monterey with Intel 
oneAPI using the following options:


./configure --with-cc=icc --with-cxx=icpc --with-fc=ifort 
--with-blas-lapack-dir=/opt/intel/oneapi/mkl/2022.0.0/lib/ 
--with-debugging=1 PETSC_ARCH=macos-intel-dbg --download-mumps 
--download-parmetis --download-metis --download-hypre --download-superlu 
--download-hdf5=yes --download-openmpi


Error with downloaded OpenMPI: Cannot compile/link FC with 
/Users/danyangsu/Soft/PETSc/petsc-3.16.3/macos-intel-dbg/bin/mpif90.


Any suggestions for that?

There is no problem if I use GNU compiler and MPICH.

Thanks,

Danyang


[petsc-users] Is old ex10.c (separated matrix and rhs) deprecated?

2022-01-10 Thread Danyang Su

Hi All,

Back in PETSc-3.8, the example ex10.c supported reading the matrix 
and the vector from separate files. Is this feature deprecated in the new 
PETSc version? I have some matrices and right-hand sides to test but could 
not use the ex10 example under the new PETSc version.
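For reference, a minimal Fortran sketch of loading a matrix and a right-hand 
side from separate PETSc binary files with MatLoad/VecLoad (the file names 
A.bin and b.bin are hypothetical, and this is a sketch rather than the ex10 
driver itself):

      program load_system
#include <petsc/finclude/petscmat.h>
      use petscmat
      implicit none
      PetscViewer    :: viewer
      Mat            :: A
      Vec            :: b
      PetscErrorCode :: ierr

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      CHKERRQ(ierr)

      ! read the matrix from its own binary file
      call PetscViewerBinaryOpen(PETSC_COMM_WORLD,'A.bin',FILE_MODE_READ,viewer,ierr)
      call MatCreate(PETSC_COMM_WORLD,A,ierr)
      call MatLoad(A,viewer,ierr)
      call PetscViewerDestroy(viewer,ierr)

      ! read the right-hand side from a separate binary file
      call PetscViewerBinaryOpen(PETSC_COMM_WORLD,'b.bin',FILE_MODE_READ,viewer,ierr)
      call VecCreate(PETSC_COMM_WORLD,b,ierr)
      call VecLoad(b,viewer,ierr)
      call PetscViewerDestroy(viewer,ierr)

      call MatDestroy(A,ierr)
      call VecDestroy(b,ierr)
      call PetscFinalize(ierr)
      end program load_system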


Thanks,

Danyang



Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI version

2021-04-11 Thread Danyang Su
Hi Junchao,

 

I also ported the changes you have made to PETSc 3.13.6 and configured with 
Intel 14.0 and OpenMPI 1.6.5, and it works too. 
There is a similar problem in PETSc 3.14+ versions, as MPI_Iallreduce is only 
available in OpenMPI 1.7+. I would not say this is a bug; it just requires a 
newer MPI version.
newer MPI version. 

 

/home/danyangs/soft/petsc/petsc-3.14.6/intel-14.0.2-openmpi-1.6.5/lib/libpetsc.so:
 undefined reference to `MPI_Iallreduce' 
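A quick way to confirm what an installed MPI library supports is 
MPI_Get_version(), which reports the MPI standard level; OpenMPI 1.6.5 
implements MPI-2.1, while the routines above require MPI-3. A minimal sketch 
(not part of the original thread):

      program check_mpi_level
      use mpi
      implicit none
      integer :: ver, subver, ierr
      call MPI_Init(ierr)
      call MPI_Get_version(ver, subver, ierr)
      ! MPI_Win_allocate/attach/create_dynamic and MPI_Iallreduce need ver >= 3
      print '(a,i0,a,i0)', 'MPI standard level: ', ver, '.', subver
      call MPI_Finalize(ierr)
      end program check_mpi_level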

 

Thanks again for all your help,

 

Danyang

From: Junchao Zhang 
Date: Sunday, April 11, 2021 at 7:54 AM
To: Danyang Su 
Cc: Barry Smith , "petsc-users@mcs.anl.gov" 

Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

Thanks, Glad to know you have a workaround.
--Junchao Zhang

 

 

On Sat, Apr 10, 2021 at 10:06 PM Danyang Su  wrote:

Hi Junchao,

 

I cannot configure your branch with the same options due to an error in sowing. 
I had a similar error before on other clusters with very old openmpi versions. 
The problem was solved when openmpi was updated to a newer one. 

 

At the moment, I have configured a PETSc version with OpenMPI 2.1.6 and it 
seems to be working properly. 

 

Thanks and have a good rest of the weekend,

 

Danyang

 

From: Danyang Su 
Date: Saturday, April 10, 2021 at 4:08 PM
To: Junchao Zhang 
Cc: Barry Smith , "petsc-users@mcs.anl.gov" 

Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

Hi Junchao,

 

The configuration is successful. The error comes from the last step when I run 

 

make PETSC_DIR=/home/danyangs/soft/petsc/petsc-3.13.6 
PETSC_ARCH=linux-intel-openmpi check

 

***Error detected during compile or link!***

See http://www.mcs.anl.gov/petsc/documentation/faq.html

/home/danyangs/soft/petsc/petsc-3.13.6/src/snes/tutorials ex5f

*

mpif90 -fPIC -O3 -march=native -mtune=nativels   
-I/home/danyangs/soft/petsc/petsc-3.13.6/include 
-I/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/include  
ex5f.F90  
-Wl,-rpath,/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-L/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-Wl,-rpath,/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-L/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-Wl,-rpath,/global/software/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64 
-L/global/software/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64 
-Wl,-rpath,/global/software/intel/composer_xe_2013_sp1.2.144/compiler/lib/intel64
 -L/global/software/intel/composer_xe_2013_sp1.2.144/compiler/lib/intel64 
-Wl,-rpath,/global/software/openmpi-1.6.5/intel/lib64 
-L/global/software/openmpi-1.6.5/intel/lib64 
-Wl,-rpath,/global/software/intel/composerxe/mkl/lib/intel64 
-L/global/software/intel/composerxe/mkl/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 
-L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 
-Wl,-rpath,/global/software/intel/composerxe/lib/intel64 -lpetsc -lHYPRE 
-lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu 
-lflapack -lfblas -lX11 -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 
-lparmetis -lmetis -lstdc++ -ldl -lmpi_f90 -lmpi_f77 -lmpi -lm -lnuma -lrt 
-lnsl -lutil -limf -lifport -lifcore -lsvml -lipgo -lintlc -lpthread -lgcc_s 
-lirc_s -lstdc++ -ldl -o ex5f

ifort: command line warning #10159: invalid argument for option '-m'

/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_allocate'

/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_attach'

/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_create_dynamic'

gmake[4]: *** [ex5f] Error 1

 

Thanks,

 

Danyang

 

From: Junchao Zhang 
Date: Saturday, April 10, 2021 at 3:57 PM
To: Danyang Su 
Cc: Barry Smith , "petsc-users@mcs.anl.gov" 

Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

You sent a wrong one. This configure.log was from a successful configuration. 
Note FOPTFLAGS="-O3 -march=native -mtune=nativels" looks suspicious.

 

--Junchao Zhang

 

 

On Sat, Apr 10, 2021 at 5:32 PM Danyang Su  wrote:

 

Hi Junchao,

 

Thanks for looking into this problem. The configuration log is attached.

 

All the best,

 

Danyang

From: Junchao Zhang 
Date: Saturday, April 10, 2021 at 2:36 PM
To: Danyang Su 
Cc: Barry Smith , "petsc-users@mcs.anl.gov" 

Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

Hi, Danyang, 

 

Send the configure.log.  Also, PETSc does not need MPI_Win_allocate etc to 
work. I will have a look.


--Junchao Zhang

 

 

On Sat, Apr 10, 2021 at 2:47 PM Danyang Su  wrote:

Hi Barry,

 

I tried this option 

Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI version

2021-04-10 Thread Danyang Su
Hi Junchao,

 

I cannot configure your branch with the same options due to an error in sowing. 
I had a similar error before on other clusters with very old openmpi versions. 
The problem was solved when openmpi was updated to a newer one. 

 

At the moment, I have configured a PETSc version with OpenMPI 2.1.6 and it 
seems to be working properly. 

 

Thanks and have a good rest of the weekend,

 

Danyang

 

From: Danyang Su 
Date: Saturday, April 10, 2021 at 4:08 PM
To: Junchao Zhang 
Cc: Barry Smith , "petsc-users@mcs.anl.gov" 

Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

Hi Junchao,

 

The configuration is successful. The error comes from the last step when I run 

 

make PETSC_DIR=/home/danyangs/soft/petsc/petsc-3.13.6 
PETSC_ARCH=linux-intel-openmpi check

 

***Error detected during compile or link!***

See http://www.mcs.anl.gov/petsc/documentation/faq.html

/home/danyangs/soft/petsc/petsc-3.13.6/src/snes/tutorials ex5f

*

mpif90 -fPIC -O3 -march=native -mtune=nativels   
-I/home/danyangs/soft/petsc/petsc-3.13.6/include 
-I/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/include  
ex5f.F90  
-Wl,-rpath,/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-L/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-Wl,-rpath,/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-L/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-Wl,-rpath,/global/software/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64 
-L/global/software/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64 
-Wl,-rpath,/global/software/intel/composer_xe_2013_sp1.2.144/compiler/lib/intel64
 -L/global/software/intel/composer_xe_2013_sp1.2.144/compiler/lib/intel64 
-Wl,-rpath,/global/software/openmpi-1.6.5/intel/lib64 
-L/global/software/openmpi-1.6.5/intel/lib64 
-Wl,-rpath,/global/software/intel/composerxe/mkl/lib/intel64 
-L/global/software/intel/composerxe/mkl/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 
-L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 
-Wl,-rpath,/global/software/intel/composerxe/lib/intel64 -lpetsc -lHYPRE 
-lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu 
-lflapack -lfblas -lX11 -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 
-lparmetis -lmetis -lstdc++ -ldl -lmpi_f90 -lmpi_f77 -lmpi -lm -lnuma -lrt 
-lnsl -lutil -limf -lifport -lifcore -lsvml -lipgo -lintlc -lpthread -lgcc_s 
-lirc_s -lstdc++ -ldl -o ex5f

ifort: command line warning #10159: invalid argument for option '-m'

/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_allocate'

/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_attach'

/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_create_dynamic'

gmake[4]: *** [ex5f] Error 1

 

Thanks,

 

Danyang

 

From: Junchao Zhang 
Date: Saturday, April 10, 2021 at 3:57 PM
To: Danyang Su 
Cc: Barry Smith , "petsc-users@mcs.anl.gov" 

Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

You sent a wrong one. This configure.log was from a successful configuration. 
Note FOPTFLAGS="-O3 -march=native -mtune=nativels" looks suspicious.

 

--Junchao Zhang

 

 

On Sat, Apr 10, 2021 at 5:32 PM Danyang Su  wrote:

 

Hi Junchao,

 

Thanks for looking into this problem. The configuration log is attached.

 

All the best,

 

Danyang

From: Junchao Zhang 
Date: Saturday, April 10, 2021 at 2:36 PM
To: Danyang Su 
Cc: Barry Smith , "petsc-users@mcs.anl.gov" 

Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

Hi, Danyang, 

 

Send the configure.log.  Also, PETSc does not need MPI_Win_allocate etc to 
work. I will have a look.


--Junchao Zhang

 

 

On Sat, Apr 10, 2021 at 2:47 PM Danyang Su  wrote:

Hi Barry,

 

I tried this option before but get ‘Error running configure on OpenMPI’

 

***

 UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
details):

---

Error running configure on OPENMPI

***

  File "/global/home/danyangs/soft/petsc/petsc-3.14.6/config/configure.py", 
line 456, in petsc_configure

framework.configure(out = sys.stdout)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1253, in configure

self.processChildren()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1242, in processChildren

self.

Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI version

2021-04-10 Thread Danyang Su
Hi Junchao,

 

The configuration is successful. The error comes from the last step when I run 

 

make PETSC_DIR=/home/danyangs/soft/petsc/petsc-3.13.6 
PETSC_ARCH=linux-intel-openmpi check

 

***Error detected during compile or link!***

See http://www.mcs.anl.gov/petsc/documentation/faq.html

/home/danyangs/soft/petsc/petsc-3.13.6/src/snes/tutorials ex5f

*

mpif90 -fPIC -O3 -march=native -mtune=nativels   
-I/home/danyangs/soft/petsc/petsc-3.13.6/include 
-I/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/include  
ex5f.F90  
-Wl,-rpath,/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-L/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-Wl,-rpath,/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-L/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib 
-Wl,-rpath,/global/software/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64 
-L/global/software/intel/composer_xe_2013_sp1.2.144/mkl/lib/intel64 
-Wl,-rpath,/global/software/intel/composer_xe_2013_sp1.2.144/compiler/lib/intel64
 -L/global/software/intel/composer_xe_2013_sp1.2.144/compiler/lib/intel64 
-Wl,-rpath,/global/software/openmpi-1.6.5/intel/lib64 
-L/global/software/openmpi-1.6.5/intel/lib64 
-Wl,-rpath,/global/software/intel/composerxe/mkl/lib/intel64 
-L/global/software/intel/composerxe/mkl/lib/intel64 
-Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 
-L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 
-Wl,-rpath,/global/software/intel/composerxe/lib/intel64 -lpetsc -lHYPRE 
-lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu 
-lflapack -lfblas -lX11 -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 
-lparmetis -lmetis -lstdc++ -ldl -lmpi_f90 -lmpi_f77 -lmpi -lm -lnuma -lrt 
-lnsl -lutil -limf -lifport -lifcore -lsvml -lipgo -lintlc -lpthread -lgcc_s 
-lirc_s -lstdc++ -ldl -o ex5f

ifort: command line warning #10159: invalid argument for option '-m'

/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_allocate'

/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_attach'

/home/danyangs/soft/petsc/petsc-3.13.6/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_create_dynamic'

gmake[4]: *** [ex5f] Error 1

 

Thanks,

 

Danyang

 

From: Junchao Zhang 
Date: Saturday, April 10, 2021 at 3:57 PM
To: Danyang Su 
Cc: Barry Smith , "petsc-users@mcs.anl.gov" 

Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

You sent a wrong one. This configure.log was from a successful configuration. 
Note FOPTFLAGS="-O3 -march=native -mtune=nativels" looks suspicious.

 

--Junchao Zhang

 

 

On Sat, Apr 10, 2021 at 5:32 PM Danyang Su  wrote:

 

Hi Junchao,

 

Thanks for looking into this problem. The configuration log is attached.

 

All the best,

 

Danyang

From: Junchao Zhang 
Date: Saturday, April 10, 2021 at 2:36 PM
To: Danyang Su 
Cc: Barry Smith , "petsc-users@mcs.anl.gov" 

Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

Hi, Danyang, 

 

Send the configure.log.  Also, PETSc does not need MPI_Win_allocate etc to 
work. I will have a look.


--Junchao Zhang

 

 

On Sat, Apr 10, 2021 at 2:47 PM Danyang Su  wrote:

Hi Barry,

 

I tried this option before but get ‘Error running configure on OpenMPI’

 

***

 UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
details):

---

Error running configure on OPENMPI

***

  File "/global/home/danyangs/soft/petsc/petsc-3.14.6/config/configure.py", 
line 456, in petsc_configure

framework.configure(out = sys.stdout)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1253, in configure

self.processChildren()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1242, in processChildren

self.serialEvaluation(self.childGraph)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1217, in serialEvaluation

child.configure()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 1144, in configure

self.executeTest(self.configureLibrary)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/base.py",
 line 140, in executeTest

ret = test(*args,**kargs)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py&q

Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI version

2021-04-10 Thread Danyang Su
Hi Junchao,

 

Thanks. I will test this branch and get back to you later.

 

All the best,

 

Danyang

 

From: Junchao Zhang 
Date: Saturday, April 10, 2021 at 3:32 PM
To: Danyang Su 
Cc: Barry Smith , "petsc-users@mcs.anl.gov" 

Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

Danyang,

  Could you try branch jczhang/fix-mpi3-win with your old configuration (i.e., 
use system mpicc)? Note the MR 3849 is based off latest petsc-3.15 release

  Thanks.

--Junchao Zhang

 

 

On Sat, Apr 10, 2021 at 4:36 PM Junchao Zhang  wrote:

Hi, Danyang, 

 

Send the configure.log.  Also, PETSc does not need MPI_Win_allocate etc to 
work. I will have a look.


--Junchao Zhang

 

 

On Sat, Apr 10, 2021 at 2:47 PM Danyang Su  wrote:

Hi Barry,

 

I tried this option before but get ‘Error running configure on OpenMPI’

 

***

 UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
details):

---

Error running configure on OPENMPI

***

  File "/global/home/danyangs/soft/petsc/petsc-3.14.6/config/configure.py", 
line 456, in petsc_configure

framework.configure(out = sys.stdout)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1253, in configure

self.processChildren()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1242, in processChildren

self.serialEvaluation(self.childGraph)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1217, in serialEvaluation

child.configure()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 1144, in configure

self.executeTest(self.configureLibrary)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/base.py",
 line 140, in executeTest

ret = test(*args,**kargs)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 902, in configureLibrary

for location, directory, lib, incl in self.generateGuesses():

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 476, in generateGuesses

d = self.checkDownload()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/packages/OpenMPI.py",
 line 56, in checkDownload

return self.getInstallDir()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 365, in getInstallDir

installDir = self.Install()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/packages/OpenMPI.py",
 line 63, in Install

installDir = config.package.GNUPackage.Install(self)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 1667, in Install

raise RuntimeError('Error running configure on ' + self.PACKAGE)



Finishing configure run at Sat, 10 Apr 2021 11:57:20 -0700

========

 

Thanks,

 

Danyang

 

From: Barry Smith 
Date: Saturday, April 10, 2021 at 10:31 AM
To: Danyang Su 
Cc: "petsc-users@mcs.anl.gov" 
Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

 

  Depending on the network, you can remove the ./configure options 
--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 and use instead 
--with-cc=icc --with-cxx=icpc and --with-fc=ifort --download-openmpi 

 

  Barry

 

 

On Apr 10, 2021, at 12:18 PM, Danyang Su  wrote:

 

Dear PETSc developers and users,

 

I am trying to install the latest PETSc version on an ancient cluster. The 
OpenMPI version is 1.6.5 and the compiler is Intel 14.0, which are the newest on 
that cluster. I have no problem installing PETSc up to version 3.12.5. However, 
if I try to use PETSc 3.13+, there are three undefined reference errors for 
MPI_Win_allocate, MPI_Win_attach and MPI_Win_create_dynamic. I know these three 
functions are only available from OpenMPI 2.0+. Because the cluster is no longer 
under technical support, there is no way I can install a newer OpenMPI version or 
do any updates. Is it possible to disable these three functions in PETSc 3.13+?

 

The errors occur in ‘make check’ step:

/home/dsu/soft/petsc/petsc-3.13.0/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_allocate'

/home/dsu/soft/petsc/petsc-3.13.0/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_attach'

Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI version

2021-04-10 Thread Danyang Su
Hi Barry,

 

I tried this option before but got ‘Error running configure on OpenMPI’

 

***

 UNABLE to CONFIGURE with GIVEN OPTIONS(see configure.log for 
details):

---

Error running configure on OPENMPI

***

  File "/global/home/danyangs/soft/petsc/petsc-3.14.6/config/configure.py", 
line 456, in petsc_configure

framework.configure(out = sys.stdout)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1253, in configure

self.processChildren()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1242, in processChildren

self.serialEvaluation(self.childGraph)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/framework.py",
 line 1217, in serialEvaluation

child.configure()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 1144, in configure

self.executeTest(self.configureLibrary)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/base.py",
 line 140, in executeTest

ret = test(*args,**kargs)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 902, in configureLibrary

for location, directory, lib, incl in self.generateGuesses():

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 476, in generateGuesses

d = self.checkDownload()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/packages/OpenMPI.py",
 line 56, in checkDownload

return self.getInstallDir()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 365, in getInstallDir

installDir = self.Install()

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/packages/OpenMPI.py",
 line 63, in Install

installDir = config.package.GNUPackage.Install(self)

  File 
"/global/home/danyangs/soft/petsc/petsc-3.14.6/config/BuildSystem/config/package.py",
 line 1667, in Install

raise RuntimeError('Error running configure on ' + self.PACKAGE)



Finishing configure run at Sat, 10 Apr 2021 11:57:20 -0700

============

 

Thanks,

 

Danyang

 

From: Barry Smith 
Date: Saturday, April 10, 2021 at 10:31 AM
To: Danyang Su 
Cc: "petsc-users@mcs.anl.gov" 
Subject: Re: [petsc-users] Undefined reference in PETSc 3.13+ with old MPI 
version

 

 

  Depending on the network you can remove the ./configure options 
--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 and use instead 
--with-cc=icc --with-cxx=icpc and --with-fc=ifort --download-openmpi 

 

  Barry

 



On Apr 10, 2021, at 12:18 PM, Danyang Su  wrote:

 

Dear PETSc developers and users,

 

I am trying to install the latest PETSc version on an ancient cluster. The 
OpenMPI version is 1.6.5 and the compiler is Intel 14.0, which are the newest on 
that cluster. I have no problem installing PETSc up to version 3.12.5. However, 
if I try to use PETSc 3.13+, there are three undefined reference errors for 
MPI_Win_allocate, MPI_Win_attach and MPI_Win_create_dynamic. I know these three 
functions are only available from OpenMPI 2.0+. Because the cluster is no longer 
under technical support, there is no way I can install a newer OpenMPI version or 
do any updates. Is it possible to disable these three functions in PETSc 3.13+?

 

The errors occur in ‘make check’ step:

/home/dsu/soft/petsc/petsc-3.13.0/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_allocate'

/home/dsu/soft/petsc/petsc-3.13.0/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_attach'

/home/dsu/soft/petsc/petsc-3.13.0/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_create_dynamic'

 

The configuration used is shown below:

./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-mumps 
--download-scalapack --download-parmetis --download-metis 
--download-fblaslapack --download-hypre --download-superlu --download-hdf5=yes 
--with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 
-march=native -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=nativels"

 

Thanks,

 

Danyang

 



[petsc-users] Undefined reference in PETSc 3.13+ with old MPI version

2021-04-10 Thread Danyang Su
Dear PETSc developers and users,

 

I am trying to install the latest PETSc version on an ancient cluster. The 
OpenMPI version is 1.6.5 and the compiler is Intel 14.0, which are the newest on 
that cluster. I have no problem installing PETSc up to version 3.12.5. However, 
if I try to use PETSc 3.13+, there are three undefined reference errors for 
MPI_Win_allocate, MPI_Win_attach and MPI_Win_create_dynamic. I know these three 
functions are only available from OpenMPI 2.0+. Because the cluster is no longer 
under technical support, there is no way I can install a newer OpenMPI version or 
do any updates. Is it possible to disable these three functions in PETSc 3.13+?

 

The errors occur in ‘make check’ step:

/home/dsu/soft/petsc/petsc-3.13.0/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_allocate'

/home/dsu/soft/petsc/petsc-3.13.0/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_attach'

/home/dsu/soft/petsc/petsc-3.13.0/linux-intel-openmpi/lib/libpetsc.so: 
undefined reference to `MPI_Win_create_dynamic'

 

The configuration used is shown below:

 ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 
--download-mumps --download-scalapack --download-parmetis --download-metis 
--download-fblaslapack --download-hypre --download-superlu --download-hdf5=yes 
--with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 
-march=native -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=nativels"

 

Thanks,

 

Danyang



Re: [petsc-users] Quite different behaviours of PETSc solver on different clusters

2020-10-29 Thread Danyang Su
Hi Matt,

No, iterations from both linear and nonlinear solvers are similar. The system 
administrator suspects that the latency in MPICH makes the difference. We will 
test a PETSc version built with OpenMPI on that cluster to check whether it makes 
a difference.

Thanks, 

Danyang

On October 29, 2020 6:05:53 p.m. PDT, Matthew Knepley  wrote:
>On Thu, Oct 29, 2020 at 3:04 PM Su,D.S. Danyang 
>wrote:
>
>> Dear PETSc users,
>>
>>
>>
>> This is a question that has bothered me for some time. I have the same code
>running
>> on different clusters and both clusters have good speedup. However, I
>> noticed something quite strange. On one cluster, the solver is quite
>> stable in computing time while on another cluster, the solver is
>unstable
>> in computing time. As shown in the figure below, the local
>calculation
>> almost has no communication and the computing time in this part is
>quite
>> stable. However, PETSc solver on Cluster B jumps quite a lot and the
>> performance is not as good as Cluster A, even though the local
>calculation
>> is a little better on Cluster B. There are some difference on
>hardware and
>> PETSc configuration and optimization. Cluster A uses OpenMPI + GCC
>compiler
>> and Cluster B uses MPICH + GCC compiler. The number of processors
>used is
>> 128 on Cluster A and 120 on Cluster B. I also tested different number
>of
>> processors but the problem is the same. Does anyone have any idea
>which
>> part might cause this problem?
>>
>
>First question: Does the solver take more iterates when the time bumps
>up?
>
>  Thanks,
>
>Matt
>
>
>>
>>
>>
>>
>> Thanks and regards,
>>
>>
>>
>> Danyang
>>
>>
>>
>
>
>-- 
>What most experimenters take for granted before they begin their
>experiments is infinitely more interesting than any results to which
>their
>experiments lead.
>-- Norbert Wiener
>
>https://www.cse.buffalo.edu/~knepley/
>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-14 Thread Danyang Su
Hi Jed,

I cannot reproduce the same problem in C (attached example). The C code works 
fine on HDF5 1.12.0. I am still confused about why this happens. 
Anyway, I will use HDF5-1.10.6 with PETSc 3.13 for the moment.

Thanks,

Danyang

On 2020-06-13, 1:39 PM, "Jed Brown"  wrote:

Can you reproduce in C?  You're missing three extra arguments that exist in 
the Fortran interface.

https://support.hdfgroup.org/HDF5/doc/RM/RM_H5D.html#Dataset-Create

    Danyang Su  writes:

> Hi Jed,
>
> Attached is the example for your test.  
>
> This example uses H5Sselect_none to tell the H5Dwrite call that there will be no 
data. The 4-th process HAS to participate since we are in a collective mode.
> The code is ported and modified based on the C example from 
https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c
>
> The compiling flags in the makefile are same as those used in my own code.
>
> To compile the code, please run 'make all'
> To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'. Any number 
of processors larger than 4 should help to detect the problem.
>
> The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.
>
> The following platforms have been tested:
>   Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
>   Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
>   Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
>   Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
>   Centos-7 + Intel2018 + HDF5-1.12.0  -> Works fine
>
> Possible error when code crashes
>  At line 6686 of file H5_gen.F90
>  Fortran runtime error: Index '1' of dimension 1 of array 'buf' above 
upper bound of 0
>
> Thanks,
>
> Danyang
>
> On 2020-06-12, 6:05 AM, "Jed Brown"  wrote:
>
> Danyang Su  writes:
>
> > Hi Jed,
> >
> > Thanks for your double check. 
> >
> > The HDF 1.10.6 version also works. But versions from 1.12.x stop 
working.
>
> I'd suggest making a reduced test case in order to submit a bug 
report.
>
> This was the relevant change in PETSc for hdf5-1.12.
>
> 
https://gitlab.com/petsc/petsc/commit/806daeb7de397195b5132278177f4d5553f9f612
>
> > Attached is the code section where I have problem.
> >
> > !c write the dataset collectively
> > 
!!!
> >  CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO 
WRITE
> > 
!!!
> > call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, 
hdf5_dsize,   &
> > hdf5_ierr, file_space_id=filespace, 
   &
> > mem_space_id=memspace, xfer_prp = xlist_id)
> >
> > Please let me know if there is something wrong in the code that 
causes the problem.
>
> !c
> !c This example uses H5Sselect_none to tell the H5Dwrite call that 
> !c there will be no data. 4-th process HAS to participate since 
> !c we are in a collective mode.
> !c 
> !c The code is ported and modified based on the C example 
> !c from 
https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c
> !c by Danyang Su on June 12, 2020.
> !c
> !c To compile the code, please run 'make all'
> !c To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'
> !c
> !c IMPORTANT NOTE
> !c The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.
> !c 
> !c The following platforms have been tested:
> !c Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
> !c Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
> !c Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
> !c Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
> !c Centos-7 + Intel2018 + HDF5-1.12.0 -> Works fine
> !c
> !c Possible error when code crashes
> !c At line 6686 of file H5_gen.F90
> !c Fortran runtime error: Index '1' of dimension 1 of array 'buf' 
above upper bound of 0
> !c 
>
> program hdf5_zero_data
>
>
> #include 
>
>   use petscsys
>   use hdf5
>
>   implicit none 
>
>   character(len=10), parameter :: h5File_Name = "SDS_row.h5"
>   character(len=8), parameter :: DatasetName = "IntArray"
>   integer, parameter :

Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-13 Thread Danyang Su
Hi Jed,

These three extra arguments in h5dcreate_f are optional, so I am not sure whether 
they cause the problem. The error is from h5dwrite_f, where all 
the arguments are provided.

I will try to see if I can reproduce the problem in C.

Thanks,

Danyang

On 2020-06-13, 1:39 PM, "Jed Brown"  wrote:

Can you reproduce in C?  You're missing three extra arguments that exist in 
the Fortran interface.

https://support.hdfgroup.org/HDF5/doc/RM/RM_H5D.html#Dataset-Create

    Danyang Su  writes:

> Hi Jed,
>
> Attached is the example for your test.  
>
> This example uses H5Sselect_none to tell the H5Dwrite call that there will be no 
data. The 4-th process HAS to participate since we are in a collective mode.
> The code is ported and modified based on the C example from 
https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c
>
> The compiling flags in the makefile are same as those used in my own code.
>
> To compile the code, please run 'make all'
> To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'. Any number 
of processors larger than 4 should help to detect the problem.
>
> The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.
>
> The following platforms have been tested:
>   Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
>   Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
>   Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
>   Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
>   Centos-7 + Intel2018 + HDF5-1.12.0  -> Works fine
>
> Possible error when code crashes
>  At line 6686 of file H5_gen.F90
>  Fortran runtime error: Index '1' of dimension 1 of array 'buf' above 
upper bound of 0
>
> Thanks,
>
> Danyang
>
> On 2020-06-12, 6:05 AM, "Jed Brown"  wrote:
>
> Danyang Su  writes:
>
> > Hi Jed,
> >
> > Thanks for your double check. 
> >
> > The HDF 1.10.6 version also works. But versions from 1.12.x stop 
working.
>
> I'd suggest making a reduced test case in order to submit a bug 
report.
>
> This was the relevant change in PETSc for hdf5-1.12.
>
> 
https://gitlab.com/petsc/petsc/commit/806daeb7de397195b5132278177f4d5553f9f612
>
> > Attached is the code section where I have problem.
> >
> > !c write the dataset collectively
> > 
!!!
> >  CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO 
WRITE
> > 
!!!
> > call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, 
hdf5_dsize,   &
> > hdf5_ierr, file_space_id=filespace, 
   &
> > mem_space_id=memspace, xfer_prp = xlist_id)
> >
> > Please let me know if there is something wrong in the code that 
causes the problem.
>
> !c
> !c This example uses H5Sselect_none to tell the H5Dwrite call that 
> !c there will be no data. 4-th process HAS to participate since 
> !c we are in a collective mode.
> !c 
> !c The code is ported and modified based on the C example 
> !c from 
https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c
> !c by Danyang Su on June 12, 2020.
> !c
> !c To compile the code, please run 'make all'
> !c To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'
> !c
> !c IMPORTANT NOTE
> !c The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.
> !c 
> !c The following platforms have been tested:
> !c Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
> !c Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
> !c Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
> !c Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
> !c Centos-7 + Intel2018 + HDF5-1.12.0 -> Works fine
> !c
> !c Possible error when code crashes
> !c At line 6686 of file H5_gen.F90
> !c Fortran runtime error: Index '1' of dimension 1 of array 'buf' 
above upper bound of 0
> !c 
>
> program hdf5_zero_data
>
>
> #include 
>
>   use petscsys
>   use hdf5
>
>   implicit none 
>
>   character(len=10), parameter :: h5File_Name = "SDS_row.h5"
>   character(len=8), parameter :: DatasetName = "IntArray"
>   integer

Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-12 Thread Danyang Su
Hi Jed,

Attached is the example for your test.  

This example uses H5Sselect_none to tell the H5Dwrite call that there will be no data. 
The 4-th process HAS to participate since we are in a collective mode.
The code is ported and modified based on the C example from 
https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c

The compiling flags in the makefile are same as those used in my own code.

To compile the code, please run 'make all'
To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'. Any number of 
processors larger than 4 should help to detect the problem.

The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.

The following platforms have been tested:
  Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
  Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
  Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
  Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
  Centos-7 + Intel2018 + HDF5-1.12.0  -> Works fine

Possible error when code crashes
 At line 6686 of file H5_gen.F90
 Fortran runtime error: Index '1' of dimension 1 of array 'buf' above upper 
bound of 0

Thanks,

Danyang

On 2020-06-12, 6:05 AM, "Jed Brown"  wrote:

Danyang Su  writes:

> Hi Jed,
>
> Thanks for your double check. 
>
> The HDF 1.10.6 version also works. But versions from 1.12.x stop working.

I'd suggest making a reduced test case in order to submit a bug report.

This was the relevant change in PETSc for hdf5-1.12.


https://gitlab.com/petsc/petsc/commit/806daeb7de397195b5132278177f4d5553f9f612

> Attached is the code section where I have problem.
>
> !c write the dataset collectively
> !!!
>  CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO WRITE
> !!!
> call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize,   &
> hdf5_ierr, file_space_id=filespace,&
> mem_space_id=memspace, xfer_prp = xlist_id)
>
> Please let me know if there is something wrong in the code that causes 
the problem.



hdf5_zero_data.F90
Description: Binary data


makefile
Description: Binary data


Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-12 Thread Danyang Su
Hi Jed,

Thanks for your double check. 

The HDF 1.10.6 version also works. But versions from 1.12.x stop working.

Attached is the code section where I have problem.

!c write the dataset collectively
!!!
 CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO WRITE
!!!
call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize,   &
hdf5_ierr, file_space_id=filespace,&
mem_space_id=memspace, xfer_prp = xlist_id)

Please let me know if there is something wrong in the code that causes the 
problem.
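
For reference, below is a minimal sketch of the pattern suggested in the HDF Group 
FAQ for ranks that have nothing to write: select 'none' on both dataspaces but still 
join the collective call. The zero-size test and the size-1 dummy buffer are 
assumptions added here to work around the gfortran bounds check reported above 
(num_local_rows is an illustrative name); they are not a confirmed fix.

    !c Hedged sketch (not the original code): ranks without data still call
    !c h5dwrite_f collectively, but select "none" on both spaces. The dummy
    !c size-1 buffer only satisfies the Fortran bounds check; it is never
    !c read because nothing is selected.
    if (num_local_rows == 0) then
      call h5sselect_none_f(filespace, hdf5_ierr)
      call h5sselect_none_f(memspace, hdf5_ierr)
      if (.not. allocated(dataset)) allocate(dataset(1,1))
    end if
    call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize,   &
                    hdf5_ierr, file_space_id=filespace,                &
                    mem_space_id=memspace, xfer_prp = xlist_id)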

Thanks,

Danyang

On 2020-06-11, 8:32 PM, "Jed Brown"  wrote:

Danyang Su  writes:

> Hi Barry,
>
> The HDF5 calls fail. I reconfigure PETSc with HDF 1.10.5 version and it 
works fine on different platforms. So, it is more likely there is a bug in the 
latest HDF version.

I would double-check that you have not subtly violated a collective 
requirement in the interface, then report to upstream.



example.F90
Description: Binary data


Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Danyang Su
Hi Barry,

The HDF5 calls fail. I reconfigure PETSc with HDF 1.10.5 version and it works 
fine on different platforms. So, it is more likely there is a bug in the latest 
HDF version.

Thanks.

All the best,

Danyang



On June 11, 2020 5:58:28 a.m. PDT, Barry Smith  wrote:
>
>Are you making HDF5 calls that fail or is it PETSc routines calling
>HDF5 that fail? 
>
>Regardless it sounds like the easiest fix is to switch back to the
>previous HDF5 and wait for HDF5 to fix what sounds to be a bug.
>
>   Barry
>
>
>> On Jun 11, 2020, at 1:05 AM, Danyang Su  wrote:
>> 
>> Hi All,
>>  
>> Sorry to send the previous incomplete email accidentally. 
>>  
>> After updating to HDF5-1.12.0, I got a problem if some processors
>have no data to write or do not need to write. Since parallel writing
>is collective, I cannot disable those processors from writing. With the
>old version, there seems to be no such problem. So far, the problem only
>occurs on Linux using the GNU compiler. The same code has no problem using
>the Intel compiler or the latest GNU compiler on MacOS. 
>>  
>> I have already included h5sselect_none in the code for those
>processors without data. But it does not take effect. The problem is
>documented in the following link (How do you write data when one
>process doesn't have or need to write data ?).
>>  https://support.hdfgroup.org/HDF5/hdf5-quest.html#par-nodata
><https://support.hdfgroup.org/HDF5/hdf5-quest.html#par-nodata>
>>  
>> Similar problem has also been reported on HDF Forum by others.
>> https://forum.hdfgroup.org/t/bug-on-hdf5-1-12-0-fortran-parallel/6864
><https://forum.hdfgroup.org/t/bug-on-hdf5-1-12-0-fortran-parallel/6864>
>>  
>> Any suggestion for that?
>>  
>> Thanks,
>>  
>> Danyang

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

[petsc-users] FW: Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Danyang Su
Hi All,

 

Sorry to send the previous incomplete email accidentally. 

 

After updating to HDF5-1.12.0, I get a problem if some processors have no 
data to write or do not need to write. Since parallel writing is collective, 
I cannot disable those processors from writing. With the old version, there 
seems to be no such problem. So far, the problem only occurs on Linux using the GNU 
compiler. The same code has no problem using the Intel compiler or the latest GNU 
compiler on MacOS.

 

I have already included h5sselect_none in the code for those processors without 
data, but it does not take effect. The problem is documented at the following 
link ("How do you write data when one process doesn't have or need to write 
data?").

 https://support.hdfgroup.org/HDF5/hdf5-quest.html#par-nodata

 

Similar problem has also been reported on HDF Forum by others.

https://forum.hdfgroup.org/t/bug-on-hdf5-1-12-0-fortran-parallel/6864

 

Any suggestion for that?

 

Thanks,

 

Danyang



[petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Danyang Su
Hi All,

 

After updating to HDF5-1.12.0, I get a problem if some processors have no 
data to write or do not need to write. Since parallel writing is collective, 
I cannot disable those processors from writing. With the old version, there 
seems to be no such problem. So far, the problem only occurs on Linux using the GNU 
compiler. The same code has no problem using the Intel compiler or the latest GNU 
compiler on MacOS.

 

Looks like it is caused by zero memory space. However, as documented in 

 



Re: [petsc-users] Bug in ex14f.F90 when debug flags are used?

2020-06-05 Thread Danyang Su
Thanks, Satish.

Danyang

On 2020-06-05, 2:11 PM, "Satish Balay"  wrote:

VecGetArray() is for F77 - it relies on  out-of-bound access.

The safer call is  VecGetArrayF90()

Now that PETSc requires F90  - perhaps VecGetArray() should be deprecated 
[and all examples fixed to use VecGetArrayF90]..

Satish
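
For reference, a minimal sketch of the F90-style access pattern, with illustrative 
variable names (x is assumed to be an existing Vec), looks like this:

    ! Hedged sketch: the F90 interface returns a properly bounded pointer,
    ! so it passes -fcheck=all.
    PetscScalar, pointer :: xx(:)

    call VecGetArrayF90(x, xx, ierr)
    CHKERRQ(ierr)
    xx(1) = 1.0
    call VecRestoreArrayF90(x, xx, ierr)
    CHKERRQ(ierr)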

On Fri, 5 Jun 2020, Danyang Su wrote:

> Hi All,
> 
>  
> 
> I have a question regarding the following example. 
> 
> 
https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/tutorials/ex14f.F90.html
> 
>  
> 
>  
> 
> When debug flags are used in the make file, the code crashed with 
following error.
> 
>  
> 
> At line 335 of file ex14f.F90
> 
> Fortran runtime error: Index '-14450582413' of dimension 1 of array 'xx' 
below lower bound of 1
> 
>  
> 
> FFLAGS   = -g -fcheck=all -fbacktrace -Wall
> 
> CPPFLAGS = -g -fcheck=all -fbacktrace -Wall
> 
>  
> 
> Does this make sense?
> 
>  
> 
> Danyang
> 
> 





[petsc-users] Bug in ex14f.F90 when debug flags are used?

2020-06-05 Thread Danyang Su
Hi All,

 

I have a question regarding the following example. 

https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/tutorials/ex14f.F90.html

 

 

When debug flags are used in the make file, the code crashed with following 
error.

 

At line 335 of file ex14f.F90

Fortran runtime error: Index '-14450582413' of dimension 1 of array 'xx' below 
lower bound of 1

 

FFLAGS   = -g -fcheck=all -fbacktrace -Wall

CPPFLAGS = -g -fcheck=all -fbacktrace -Wall

 

Does this make sense?

 

Danyang



Re: [petsc-users] Agglomeration for Multigrid on Unstructured Meshes

2020-06-01 Thread Danyang Su
Thanks, Jed, for the quick response. Yes, I am asking about the repartitioning of 
coarse grids in geometric multigrid for unstructured meshes. I am happy with AMG. 
Thanks for letting me know.

Danyang

On 2020-06-01, 1:47 PM, "Jed Brown"  wrote:

I assume you're talking about repartitioning of coarse grids in
geometric multigrid -- that hasn't been implemented.


https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCTELESCOPE.html

But you can use an algebraic multigrid that does similar communicator
reduction, and can be applied to the original global problem or just on
the "coarse" problem of an initial geometric hierarchy.

Danyang Su  writes:

> Dear All,
>
>  
>
> I recall there was a presentation ‘Extreme-scale multigrid components 
with PETSc’ talking about agglomeration in parallel multigrid, with a future plan 
to extend support to unstructured meshes. Is this under development or to be 
added? 
>
>  
>
> Thanks and regards,
>
>  
>
> Danyang




[petsc-users] Agglomeration for Multigrid on Unstructured Meshes

2020-06-01 Thread Danyang Su
Dear All,

 

I recall there was a presentation ‘Extreme-scale multigrid components with 
PETSc’ talking about agglomeration in parallel multigrid, with a future plan to 
extend support to unstructured meshes. Is this under development or to be 
added? 

 

Thanks and regards,

 

Danyang



Re: [petsc-users] Domain decomposition using DMPLEX

2019-11-26 Thread Danyang Su

On 2019-11-26 10:18 a.m., Matthew Knepley wrote:
On Tue, Nov 26, 2019 at 11:43 AM Danyang Su <mailto:danyang...@gmail.com>> wrote:


On 2019-11-25 7:54 p.m., Matthew Knepley wrote:

On Mon, Nov 25, 2019 at 6:25 PM Swarnava Ghosh
mailto:swarnav...@gmail.com>> wrote:

Dear PETSc users and developers,

I am working with dmplex to distribute a 3D unstructured mesh
made of tetrahedrons in a cuboidal domain. I had a few queries:
1) Is there any way of ensuring load balancing based on the
number of vertices per MPI process.


You can now call DMPlexRebalanceSharedPoints() to try and get
better balance of vertices.


Hi Matt,

I just want to follow up on whether this new function can help to solve
the "Strange Partition in PETSc 3.11" problem I mentioned before.
Would you please let me know when I should call this function?
Right before DMPlexDistribute?

This is not the problem. I believe the problem is that you are 
partitioning hybrid cells, and the way we handle
them internally changed, which I think screwed up the dual mesh for 
partitioning in your example. I have been

sick, so I have not gotten to your example yet, but I will.


Hope you get well soon. The mesh is not hybrid; it contains only prism cells, 
layer by layer, but the height of the prisms varies significantly.


Thanks,

Danyang



  Sorry about that,

    Matt

call DMPlexCreateFromCellList

call DMPlexGetPartitioner

call PetscPartitionerSetFromOptions

call DMPlexDistribute

Thanks,

Danyang


2) As the global domain is cuboidal, is the resulting domain
decomposition also cuboidal on every MPI process? If not, is
there a way to ensure this? For example in DMDA, the default
domain decomposition for a cuboidal domain is cuboidal.


It sounds like you do not want something that is actually
unstructured. Rather, it seems like you want to
take a DMDA type thing and split it into tets. You can get a
cuboidal decomposition of a hex mesh easily.
Call DMPlexCreateBoxMesh() with one cell for every process,
distribute, and then uniformly refine. This
will not quite work for tets since the mesh partitioner will tend
to violate that constraint. You could:

  a) Prescribe the distribution yourself using the Shell
partitioner type

or

  b) Write a refiner that turns hexes into tets

We already have a refiner that turns tets into hexes, but we
never wrote the other direction because it was not clear
that it was useful.

  Thanks,

     Matt

Sincerely,
SG



-- 
What most experimenters take for granted before they begin their

experiments is infinitely more interesting than any results to
which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
<http://www.cse.buffalo.edu/~knepley/>




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 
<http://www.cse.buffalo.edu/~knepley/>


Re: [petsc-users] Domain decomposition using DMPLEX

2019-11-26 Thread Danyang Su

On 2019-11-25 7:54 p.m., Matthew Knepley wrote:
On Mon, Nov 25, 2019 at 6:25 PM Swarnava Ghosh > wrote:


Dear PETSc users and developers,

I am working with dmplex to distribute a 3D unstructured mesh made
of tetrahedrons in a cuboidal domain. I had a few queries:
1) Is there any way of ensuring load balancing based on the number
of vertices per MPI process.


You can now call DMPlexRebalanceSharedPoints() to try and get better 
balance of vertices.


Hi Matt,

I just want to follow up on whether this new function can help to solve the 
"Strange Partition in PETSc 3.11" problem I mentioned before. Would you 
please let me know when I should call this function? Right before 
DMPlexDistribute?


call DMPlexCreateFromCellList

call DMPlexGetPartitioner

call PetscPartitionerSetFromOptions

call DMPlexDistribute

Thanks,

Danyang


2) As the global domain is cuboidal, is the resulting domain
decomposition also cuboidal on every MPI process? If not, is there
a way to ensure this? For example in DMDA, the default domain
decomposition for a cuboidal domain is cuboidal.


It sounds like you do not want something that is actually 
unstructured. Rather, it seems like you want to
take a DMDA type thing and split it into tets. You can get a cuboidal 
decomposition of a hex mesh easily.
Call DMPlexCreateBoxMesh() with one cell for every process, 
distribute, and then uniformly refine. This
will not quite work for tets since the mesh partitioner will tend to 
violate that constraint. You could:


  a) Prescribe the distribution yourself using the Shell partitioner type

or

  b) Write a refiner that turns hexes into tets

We already have a refiner that turns tets into hexes, but we never 
wrote the other direction because it was not clear

that it was useful.

  Thanks,

     Matt

Sincerely,
SG



--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 



Re: [petsc-users] DMPlex memory problem in scaling test

2019-10-10 Thread Danyang Su via petsc-users

Hi Matt,

My previous test was terminated after calling subroutine A, as shown below.

>> In Subroutine A

  call DMPlexDistribute(dmda_flow%da,stencil_width,    &
    PETSC_NULL_SF,distributedMesh,ierr)
  CHKERRQ(ierr)

  if (distributedMesh /= PETSC_NULL_DM) then

    call DMDestroy(dmda_flow%da,ierr)
    CHKERRQ(ierr)
    !c set the global mesh as distributed mesh
    dmda_flow%da = distributedMesh

    call DMDestroy(distributedMesh,ierr)

   !If DMDestroy(distributedMesh,ierr) is called, then everything is 
destroyed and there is no output with -malloc_test. However, I then get an 
error in the next subroutine: [0]PETSC ERROR: DMGetCoordinatesLocal() 
line 5545 in /home/dsu/Soft/PETSc/petsc-dev/src/dm/interface/dm.c Object 
already free: Parameter # 1


    CHKERRQ(ierr)

 end if

>> In Subroutine B

  !c get local mesh DM and set coordinates

  call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr)
  CHKERRQ(ierr)

  call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
  CHKERRQ(ierr)
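
From the comment embedded above, one plausible reading (an assumption, not confirmed 
in this thread) is that the distributed DM is destroyed right after it is stored in 
dmda_flow%da, so the later DMGetCoordinatesLocal call sees a freed object. A hedged 
sketch of the usual distribute-and-swap pattern, using the thread's own names:

    !c Hedged sketch: destroy only the serial DM after distribution, keep the
    !c distributed DM alive while it is in use, and destroy it once at the end.
    call DMPlexDistribute(dmda_flow%da,stencil_width,              &
                          PETSC_NULL_SF,distributedMesh,ierr)
    CHKERRQ(ierr)
    if (distributedMesh /= PETSC_NULL_DM) then
      call DMDestroy(dmda_flow%da,ierr)     ! free the serial mesh only
      CHKERRQ(ierr)
      dmda_flow%da = distributedMesh        ! keep the distributed mesh
    end if
    ! ... use dmda_flow%da (DMGetCoordinatesLocal, DMGetCoordinateDM, ...) ...
    call DMDestroy(dmda_flow%da,ierr)       ! destroy once, at shutdown
    CHKERRQ(ierr)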

Thanks,

Danyang


On 2019-10-10 6:15 p.m., Matthew Knepley wrote:


On Thu, Oct 10, 2019 at 9:00 PM Danyang Su <mailto:danyang...@gmail.com>> wrote:



Labels should be destroyed with the DM. Just make a small code
that does nothing but distribute the mesh and end. If you
run with -malloc_test you should see if everything is destroyed
properly.

  Thanks,

    Matt


Attached is the output of the run with -malloc_test using 2 processors.
It's a big file. How can I quickly check if something is not
properly destroyed?

Everything in that output has not been destroyed. It looks like you did not 
destroy the distributed DM.


  Thanks,

    Matt

Thanks,

Danyang

--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 
<http://www.cse.buffalo.edu/~knepley/>


Re: [petsc-users] Makefile change for PETSc3.12.0???

2019-10-02 Thread Danyang Su via petsc-users

On 2019-10-02 11:00 a.m., Balay, Satish wrote:

Can you retry with this fix:

https://gitlab.com/petsc/petsc/commit/3ae65d51d08dba2e118033664acfd64a46c9bf1d

[You can use maint branch for it]

Satish


This works. Thanks.

Danyang



On Wed, 2 Oct 2019, Danyang Su via petsc-users wrote:


Dear All,

I installed PETSc 3.12.0 and got a problem compiling my code (Fortran
and C++). The code and makefile are the same as those I used for the previous PETSc
version.

The error suggests that the make command does not know the compiler
information. I tested this on two Linux workstations and both return the same
error.

make: *** No rule to make target '../../usg/math_common.o', needed by 'exe'.
Stop.

The makefile I use is shown below:

#PETSc variables for development version, version V3.6.0 and later
include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules

CFLAGS =
CXXFLAGS = -std=c++11 -O3
CPPFLAGS = -DUSECGAL_NO
FFLAGS = -frounding-math -O3
FPPFLAGS = -DLINUX -DRELEASE -DRELEASE_X64 -DPETSC -DPETSC_HAVE_MUMPS
-DPETSC_HAVE_SUPERLU
CLEANFILES = executable-linux

SRC =./../../

OBJS = $(SRC)usg/math_common.o\
     $(SRC)usg/geometry_definition.o\
     ...
     $(SRC)updtrootdensity.o

exe: $(OBJS) chkopts
     -${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS) -o executable-linux $(OBJS)
${PETSC_LIB}

Any idea on this?

Thanks,

Danyang



Re: [petsc-users] Error running configure on SOWING

2019-09-10 Thread Danyang Su via petsc-users

Hi Barry,

With both --download-sowing-cc= and --download-sowing-cxx= specified, it 
can be configured now.


Thanks as always for all your help,

Danyang

On 2019-09-10 11:19 a.m., Smith, Barry F. wrote:

   Ahh, sorry it also needs the C++ compiler provided with 
--download-sowing-cxx= something



On Sep 10, 2019, at 1:11 PM, Danyang Su  wrote:

Sorry I forgot to attached the log file.

Attached are the log files using the following configuration:

./configure COPTFLAGS="-march=native -O2" CXXOPTFLAGS="-march=native -O2" 
FOPTFLAGS="-march=native -O2" --with-scalapack=1 
--with-scalapack-lib="[/scinet/niagara/intel/2019.1/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so,/scinet/niagara/intel/2019.1/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so]"
 --download-parmetis=1 --download-metis=1 --download-ptscotch=1 --download-fblaslapack=1 --download-hypre=1 
--download-superlu_dist=1 --with-hdf5=1 
--with-hdf5-dir=/scinet/niagara/software/2019a/opt/intel-2019.1-intelmpi-2019.1/hdf5-mpi/1.10.4 --download-zlib=1 
--download-szlib=1 --download-ctetgen=1 --with-debugging=0 --with-cxx-dialect=C++11 
--with-mpi-dir=/scinet/niagara/intel/2019.1/compilers_and_libraries_2019.1.144/linux/mpi/intel64 
--download-sowing-cc=/scinet/niagara/intel/2019.1/compilers_and_libraries_2019.1.144/linux/bin/intel64/icc

Thanks,

Danyang

On 2019-09-10 11:03 a.m., Smith, Barry F. wrote:

   Please send the configure.log file when run with 
--download-sowing-cc=yourCcompiler  and also 
$PETSC_ARCH/externalpackages/git.sowing/config.log this will tell us why it is 
rejecting the C compiler.

    Barry



On Sep 10, 2019, at 12:43 PM, Danyang Su via petsc-users 
 wrote:

Dear All,

I am trying to install petsc-dev on a cluster with the Intel compiler. However, the 
configuration gets stuck on SOWING.

Error running configure on SOWING: Could not execute "['./configure 
--prefix=/home/m/min3p/danyangs/soft/petsc/petsc-dev/linux-intel-opt']":
checking for ranlib... ranlib
checking for a BSD-compatible install... /usr/bin/install -c
checking whether install works... yes
checking for ar... ar
checking for gcc... no
checking for cc... no
checking for cl.exe... noconfigure: error: in 
`/gpfs/fs1/home/m/min3p/danyangs/soft/petsc/petsc-dev/linux-intel-opt/externalpackages/git.sowing':
configure: error: no acceptable C compiler found in $PATH
See `config.log' for more details

Actually the C compiler is there.

If I use GNU compiler, there is no problem. I also tried to use different 
sowing configuration as discussed on 
https://lists.mcs.anl.gov/pipermail/petsc-dev/2018-June/023070.html, but 
without success.

The configuration is

./configure COPTFLAGS="-march=native -O2" CXXOPTFLAGS="-march=native -O2" 
FOPTFLAGS="-march=native -O2" --with-scalapack=1 
--with-scalapack-lib="[/scinet/niagara/intel/2019.1/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64/libmkl_scalapack_lp64.so,/scinet/niagara/intel/2019.1/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so]"
 --download-parmetis=1 --download-metis=1 --download-ptscotch=1 --download-fblaslapack=1 --download-hypre=1 
--download-superlu_dist=1 --with-hdf5=1 
--with-hdf5-dir=/scinet/niagara/software/2019a/opt/intel-2019.1-intelmpi-2019.1/hdf5-mpi/1.10.4 --download-zlib=1 
--download-szlib=1 --download-ctetgen=1 --with-debugging=0 --with-cxx-dialect=C++11 
--with-mpi-dir=/scinet/niagara/intel/2019.1/compilers_and_libraries_2019.1.144/linux/mpi/intel64 -download-sowing

Any suggestion on this?

Thanks and regards,

danyang







[petsc-users] Error in creating compressed data using HDF5

2019-09-02 Thread Danyang Su via petsc-users

Dear All,

Not sure if this is the right place to ask an HDF5 question. I installed 
HDF5 through the PETSc configure option --download-hdf5=yes. The code runs 
without problem except for the function to create compressed data (the part 
shown below).


    !c create local memory space and hyperslab
    call h5screate_simple_f(hdf5_ndim, hdf5_dsize, memspace,   &
    hdf5_ierr)
    call h5sselect_hyperslab_f(memspace, H5S_SELECT_SET_F, &
   hdf5_offset, hdf5_count, hdf5_ierr, &
   hdf5_stride, hdf5_block)

    !c create the global file space and hyperslab
    call h5screate_simple_f(hdf5_ndim,hdf5_gdsize,filespace, &
    hdf5_ierr)
    call h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F,    &
   hdf5_goffset, hdf5_count, hdf5_ierr,    &
   hdf5_stride, hdf5_block)

    !c create a data chunking property
    call h5pcreate_f(H5P_DATASET_CREATE_F, chunk_id, hdf5_ierr)
    call h5pset_chunk_f(chunk_id, hdf5_ndim, hdf5_csize, hdf5_ierr)

    !c create compressed data, dataset must be chunked for compression
    !c the following cause crash in hdf5 library, check when new
    !c hdf5 version is available

    ! Set ZLIB / DEFLATE Compression using compression level 6.
    ! To use SZIP Compression comment out these lines.
    !call h5pset_deflate_f(chunk_id, 6, hdf5_ierr)

    ! Uncomment these lines to set SZIP Compression
    !szip_options_mask = H5_SZIP_NN_OM_F
    !szip_pixels_per_block = 16
    !call H5Pset_szip_f(chunk_id, szip_options_mask,    &
    !   szip_pixels_per_block, hdf5_ierr)

    !c create the dataset id
    call h5dcreate_f(group_id, dataname, H5T_NATIVE_INTEGER,   &
 filespace, dset_id, hdf5_ierr,    &
 dcpl_id=chunk_id)

    !c create a data transfer property
    call h5pcreate_f(H5P_DATASET_XFER_F, xlist_id, hdf5_ierr)
    call h5pset_dxpl_mpio_f(xlist_id, H5FD_MPIO_COLLECTIVE_F,  &
    hdf5_ierr)

    !c write the dataset collectively
    call h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, dataset, hdf5_dsize,  &
    hdf5_ierr, file_space_id=filespace,    &
    mem_space_id=memspace, xfer_prp = xlist_id)

    call h5dclose_f(dset_id, hdf5_ierr)

    !c close resources
    call h5sclose_f(filespace, hdf5_ierr)
    call h5sclose_f(memspace, hdf5_ierr)
    call h5pclose_f(chunk_id, hdf5_ierr)
    call h5pclose_f(xlist_id, hdf5_ierr)


Both h5pset_deflate_f and H5Pset_szip_f crash the code with the error 
information shown below. If I comment out h5pset_deflate_f and 
H5Pset_szip_f, then everything works fine.


HDF5-DIAG: Error detected in HDF5 (1.8.18) MPI-process 0:
  #000: H5D.c line 194 in H5Dcreate2(): unable to create dataset
    major: Dataset
    minor: Unable to initialize object
  #001: H5Dint.c line 455 in H5D__create_named(): unable to create and 
link to dataset

    major: Dataset
    minor: Unable to initialize object
  #002: H5L.c line 1638 in H5L_link_object(): unable to create new link 
to object

    major: Links
    minor: Unable to initialize object
  #003: H5L.c line 1882 in H5L_create_real(): can't insert link
    major: Symbol table
    minor: Unable to insert object
  #004: H5Gtraverse.c line 861 in H5G_traverse(): internal path 
traversal failed

    major: Symbol table
    minor: Object not found

Has anyone encountered this kind of error before?
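
As a hedged side note, not from this thread: to my understanding, writing filtered 
(compressed) datasets through parallel HDF5 is only supported from HDF5 1.10.2 
onwards, so enabling deflate or SZIP in a collective MPI-IO write against HDF5 1.8.18 
may simply be unsupported. Independent of that, a small sketch for checking whether 
the deflate filter is even present in the HDF5 build (reusing chunk_id and hdf5_ierr 
from the snippet above) could look like:

    !c Hedged sketch: enable deflate only if the filter is available in this
    !c HDF5 build; otherwise keep plain chunked output.
    logical :: deflate_avail

    call h5zfilter_avail_f(H5Z_FILTER_DEFLATE_F, deflate_avail, hdf5_ierr)
    if (deflate_avail) then
      call h5pset_deflate_f(chunk_id, 6, hdf5_ierr)
    end if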

Kind regards,

Danyang



Re: [petsc-users] different dof in DMDA creating

2019-08-15 Thread Danyang Su via petsc-users

Hi Barry and Matt,

Would you please give me some advice on the functions I need to use to 
set a different dof on specified nodes? For now, I use 
DMPlexCreateSection with a dof that is uniform throughout the domain. I am a 
bit lost in choosing the right DMPlex functions, unfortunately.
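
For what it is worth, a minimal sketch of the general DMPlex route, building a 
PetscSection by hand and setting the dof point by point, is shown below. The 
on_fracture() test is a placeholder, and the call that attaches the section is named 
DMSetLocalSection in recent releases (DMSetSection/DMSetDefaultSection in older 
ones); treat this as an assumption-laden outline rather than a recipe.

    ! Hedged sketch: per-point dof through a PetscSection (names illustrative).
    PetscSection :: section
    PetscInt     :: pStart, pEnd, vStart, vEnd, p

    call PetscSectionCreate(PETSC_COMM_WORLD, section, ierr); CHKERRQ(ierr)
    call DMPlexGetChart(dm, pStart, pEnd, ierr); CHKERRQ(ierr)
    call PetscSectionSetChart(section, pStart, pEnd, ierr); CHKERRQ(ierr)
    call DMPlexGetDepthStratum(dm, 0, vStart, vEnd, ierr); CHKERRQ(ierr) ! vertices
    do p = vStart, vEnd - 1
      if (on_fracture(p)) then                 ! placeholder test
        call PetscSectionSetDof(section, p, 2, ierr)
      else
        call PetscSectionSetDof(section, p, 1, ierr)
      end if
      CHKERRQ(ierr)
    end do
    call PetscSectionSetUp(section, ierr); CHKERRQ(ierr)
    call DMSetLocalSection(dm, section, ierr); CHKERRQ(ierr)
    call PetscSectionDestroy(section, ierr); CHKERRQ(ierr)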


Thanks and regards,

Danyang

On 2019-02-07 1:53 p.m., Danyang Su wrote:

Thanks, Barry. DMPlex also works for my code.

Danyang

On 2019-02-07 1:14 p.m., Smith, Barry F. wrote:

   No, you would need to use the more flexible DMPlex


On Feb 7, 2019, at 3:04 PM, Danyang Su via petsc-users 
 wrote:


Dear PETSc Users,

Does DMDA support a different number of degrees of freedom for 
different nodes? For example, I have a 2D subsurface flow problem with 
the default dof = 1 throughout the domain. Now I want to add some 
sparse fractures in the domain. For the nodes connected to the 
sparse fractures, I want to set dof to 2. Is it possible to set dof 
to 2 for those nodes only?


Thanks,

Danyang



[petsc-users] Strange compiling error in DMPlexDistribute after updating PETSc to V3.11.0

2019-04-05 Thread Danyang Su via petsc-users

Hi All,

I got a strange error when calling DMPlexDistribute after updating PETSc 
to V3.11.0. There seems to be no change in the interface of DMPlexDistribute, 
as documented at


https://www.mcs.anl.gov/petsc/petsc-3.10/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute

https://www.mcs.anl.gov/petsc/petsc-3.11/docs/manualpages/DMPLEX/DMPlexDistribute.html#DMPlexDistribute

The code section is shown below.

  !c distribute mesh over processes
  call 
DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)


When I use PETSc V3.10 and earlier versions, it works fine. After 
updating to the latest PETSc V3.11.0, I get the following error during compilation:


  call 
DMPlexDistribute(dm,stencil_width,PETSC_NULL_SF,distributed_dm,ierr)

 1
Error: Non-variable expression in variable definition context (actual 
argument to INTENT = OUT/INOUT) at (1)
/home/dsu/Soft/PETSc/petsc-3.11.0/linux-gnu-opt/lib/petsc/conf/petscrules:31: 
recipe for target '../../solver/solver_ddmethod.o' failed


The Fortran example 
/home/dsu/Soft/PETSc/petsc-3.11.0/src/dm/label/examples/tutorials/ex1f90.F90, 
which also uses DMPlexDistribute, can be compiled without problem. Are 
there any updates to the compiler flags that I need to make?
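
A hedged guess rather than a confirmed diagnosis: the message points at the third 
argument, which the 3.11 Fortran interface appears to declare intent(out), so the 
constant PETSC_NULL_SF can no longer be passed there. One workaround to try is to 
pass an actual PetscSF variable and destroy it afterwards if it is not needed:

    !c Hedged sketch: use a real PetscSF variable instead of PETSC_NULL_SF.
    PetscSF :: migrationSF

    call DMPlexDistribute(dm,stencil_width,migrationSF,distributed_dm,ierr)
    CHKERRQ(ierr)
    if (migrationSF /= PETSC_NULL_SF) then
      call PetscSFDestroy(migrationSF,ierr)   ! not used further in this sketch
      CHKERRQ(ierr)
    end if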


Thanks,

Danyang




Re: [petsc-users] Installation error on macOS Mojave using GNU compiler

2019-01-04 Thread Danyang Su via petsc-users



On 2019-01-03, 4:59 PM, "Balay, Satish"  wrote:

On Thu, 3 Jan 2019, Matthew Knepley via petsc-users wrote:

> On Thu, Jan 3, 2019 at 7:02 PM Danyang Su via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> 
> > Hi All,
> >
> > I am trying to install PETSc on macOS Mojave using GNU compiler.
> > First, I tried the debug version using the following configuration and 
it
> > works fine.
> >
> > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
> > --download-mpich --download-scalapack --download-parmetis 
--download-metis
> > --download-ptscotch --download-fblaslapack --download-hypre
> > --download-superlu_dist --download-hdf5=yes --download-ctetgen
> >
> > After testing debug version, I reconfigured PETSc with optimization 
turned
> > on using the following configuration. However, I got error during this 
step.
> >
> 
> Your optimization flags are not right because the compiler is producing
> AVX-512, but your linker cannot handle it. However, it looks like
> it might be that your Fortran compiler can't handle it. Do you need
> Fortran? If not, turn it off (configure with --with-fc=0) and try again.

Or use 'FOPTFLAGS=-O3' [if its indeed fortran sources causing grief]

If '-march=native -mtune=native' gives compiler/linker errors - don't use 
them.

Satish

Hi Satish,

After removing "-march=native -mtune=native", it works now.

Thanks,

Danyang

> 
>   Thanks,
> 
> Matt
> 
> 
> > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
> > --with-cxx-dialect=C++11 --download-mpich --download-scalapack
> > --download-parmetis --download-metis --download-ptscotch
> > --download-fblaslapack --download-hypre --download-superlu_dist
> > --download-hdf5=yes --download-ctetgen --with-debugging=0 COPTFLAGS="-O3
> > -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native 
-mtune=native"
> > FOPTFLAGS="-O3 -march=native -mtune=native"
> >
> > The error information is
> >
> > ctoolchain/usr/bin/ranlib: file: .libs/libmpl.a(mpl_dbg.o) has no 
symbols
> > 
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
> > file: .libs/libmpl.a(mpl_dbg.o) has no symbols
> > 
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib:
> > file: .libs/libmpl.a(mpl_dbg.o) has no symbols
> > /var/folders/jm/wcm4mv8s3v1gqz383tcf_4c0gp/T//ccrlfFuo.s:14:2: 
error:
> > instruction requires: AVX-512 ISA AVX-512 VL ISA
> > vmovdqu64   (%rdi), %xmm0
> > ^
> > make[2]: *** [src/binding/fortran/use_mpi/mpi_constants.mod-stamp] 
Error 1
> > make[2]: *** Waiting for unfinished jobs
> > make[1]: *** [all-recursive] Error 1
> > make: *** [all] Error 2
> >
> >
> > Thanks,
> >
> > Danyang
> >
> 
> 
> 






Re: [petsc-users] [petsc-maint] Fwd: DMPlex global to natural problem using DmPlexGetVertexNumbering or DMPlexGlobalToNatural

2018-11-29 Thread Danyang Su via petsc-users


On 18-11-29 06:13 PM, Matthew Knepley wrote:
On Thu, Nov 29, 2018 at 7:40 PM Danyang Su via petsc-maint 
mailto:petsc-ma...@mcs.anl.gov>> wrote:


Dear PETSc developers & users,

Sorry to bother you again. I just encountered some difficulties with
DMPlex global-to-natural ordering. This is not a strictly necessary function
in my code, but it is best to have it in case someone wants to feed the code
with initial conditions or parameters from an external file using natural
ordering.

First I tried to use DMPlexGetVertexNumbering, which seems pretty
straightforward, but I always get an error saying "You need a ISO C
conforming compiler to use the glibc headers". I use gfortran on
Linux.

Then I switched to using a Label to save the global-natural order before
distributing the mesh, and everything works fine. The only problem is that
it takes a very long time during mesh distribution when the mesh
is very big.

Finally I tried to use GlobalToNatural ordering: set Natural order to
TRUE before calling DMPlexDistribute and then call DMPlexSetMigrationSF.
However, the function DMPlexGlobalToNaturalEnd seems to do nothing, as
the returned vector is always unchanged. I got some help from Josh, who
has done similar work before, but I still cannot figure out what is
wrong in the code.
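
For reference, a hedged sketch of the call order described above (illustrative 
names, error checks shortened); whether each of these routines has a Fortran 
binding in your PETSc version is an assumption to verify:

    ! Hedged sketch of the global-to-natural call order.
    call DMSetUseNatural(dm, PETSC_TRUE, ierr); CHKERRQ(ierr)
    call DMPlexDistribute(dm, 0, migrationSF, dmDist, ierr); CHKERRQ(ierr)
    if (dmDist /= PETSC_NULL_DM) then
      call DMPlexSetMigrationSF(dmDist, migrationSF, ierr); CHKERRQ(ierr)
      call DMDestroy(dm, ierr); CHKERRQ(ierr)
      dm = dmDist
    end if
    ! ... later, map a global vector gvec into natural ordering nvec ...
    call DMPlexGlobalToNaturalBegin(dm, gvec, nvec, ierr); CHKERRQ(ierr)
    call DMPlexGlobalToNaturalEnd(dm, gvec, nvec, ierr); CHKERRQ(ierr)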


Can you run the example that Blaise recommended? Sometimes its easiest 
to start

with an example and change it into your code.

I cannot compile ex26 due to a missing exodusII.h file. The error information is:
make ex26
/home/dsu/Soft/PETSc/petsc-3.10.2/linux-gnu-dbg/bin/mpicc -o ex26.o -c 
-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas 
-fstack-protector -fvisibility=hidden -g3 
-I/home/dsu/Soft/PETSc/petsc-3.10.2/include 
-I/home/dsu/Soft/PETSc/petsc-3.10.2/linux-gnu-dbg/include `pwd`/ex26.c
/home/dsu/Soft/PETSc/petsc-3.10.2/src/dm/impls/plex/examples/tests/ex26.c:10:22: 
fatal error: exodusII.h: No such file or directory

compilation terminated.
/home/dsu/Soft/PETSc/petsc-3.10.2/lib/petsc/conf/rules:359: recipe for 
target 'ex26.o' failed


Anyway, I will look at this example to get familiar with GlobalToNatural 
related functions.


Thanks,

Danyang



  Thanks,

Matt

Attached is the code section with related functions included,
together
with a mesh and output of global vector and natural vector. If
something
is wrong in the code, it should be in these four functions. The
related
functions are called in the following order.

call solver_dd_create_dmplex

call solver_dd_mapping_set_dmplex

call solver_dd_DMDACreate_flow

!call solver_dd_DMDACreate_reactNot used in current testing

call solver_dd_mapping_global_natural

I really appreciate your help.

Regards,

Danyang



--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 
<http://www.cse.buffalo.edu/%7Eknepley/>




Re: [petsc-users] Makefile for mixed C++ and Fortran code

2018-06-03 Thread Danyang Su

Hi Satish,

Your makefile works.

Please ignore the CGAL dependency. I just modified it from a CGAL example, and 
it still needs to be trimmed down to only the dependencies required 
by the algorithm I use.


Thanks,

Danyang


On 18-06-02 02:51 PM, Satish Balay wrote:

Try the attached makefile. [with correct PETSC_DIR and PETSC_ARCH values]

If you have issues - send the complete makefiles - and complete error log..


On Sat, 2 Jun 2018, Danyang Su wrote:


Hi Barry,

For the code without PETSc, the rules used to compile the code with CGAL are:

note: DLIB can probably be simplified - and a few of the options eliminated.



DLIB = -lstdc++ -lmetis -lm -L/usr/local/lib -rdynamic

-lstdc++ is setup by petsc

-lmetis can be a dependency of petsc - so care should be taken to have only one 
copy of metis


/usr/local/lib/libmpfr.so /usr/local/lib/libgmp.so

Why are these here? Normally these are dependencies of gcc/gfortran [and PETSc 
configure picks up the correct ones]


/usr/local/lib/libCGAL_ImageIO.so.11.0.1 /usr/local/lib/libCGAL.so.11.0.1

I have no idea what these are..

Satish


/usr/local/lib/libboost_thread.so /usr/local/lib/libboost_system.so -lpthread
-lGLU -lGL -lX11 -lXext -lz /usr/local/lib/libCGAL_ImageIO.so.11.0.1
/usr/local/lib/libCGAL.so.11.0.1 /usr/local/lib/libboost_thread.so
/usr/local/lib/libboost_system.so -lpthread -lGLU -lGL -lX11 -lXext -lz
/usr/local/lib/libmpfr.so /usr/local/lib/libgmp.so
/usr/local/lib/libboost_thread.so /usr/local/lib/libboost_system.so -lpthread
-Wl,-rpath,/usr/local/lib

FFLAGS = -O3 -I$(LIS_INC)
CXXFLAGS = -std=c++11 -O3 -I$(LIS_INC)

However, after adding these to the makefile that uses PETSc, I got errors telling
me that the PETSc include files cannot be found.

../../solver/solver_snes_common.F90:27:0: fatal error: petscversion.h: No such
\
file or directory
  #include 

../../solver/solver_snes_common.F90:27:0: fatal error: petscversion.h: No such
\
file or directory
  #include 

Similar errors occur for other header files. However, if I change the include path to the
full path, the code still cannot be compiled. Do the rules I use break the PETSc
relative include paths?

The make commands I use are:

executable: $(SOURCES) chkopts
 -${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out
$(SOURCES) ${PETSC_LIB} ${LIS_LIB} ${DLIB}
%.o:%.F90
 $(FLINKER) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
 $(CLINKER) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@

Thanks,

Danyang

On 18-06-01 10:41 AM, Smith, Barry F. wrote:

 You need to determine exactly what flags are passed to the C++ compiler
 for your compile that works and make sure those same flags are used in
 "PETSc version" of the makefile. You could add the flags directly to the
 rule


%.o:%.cpp
 $(CLINKER) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@

Barry



On Jun 1, 2018, at 12:37 PM, Danyang Su  wrote:

Follow up:

With following command

executable: $(SOURCES) chkopts
 -${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out
 $(SOURCES) ${PETSC_LIB}

%.o:%.F90
 $(FLINKER) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
 $(CLINKER) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@

The compiler return error: no match function.

../../usg/cgal_triangulation_2d.cpp: In function ‘void
outputTriangulation2d(in\
t, const char*, int, const char*)’:
../../usg/cgal_triangulation_2d.cpp:485:20: error: no matching function for
cal\
l to ‘std::basic_ofstream::open(std::string&)’
 out.open(strfile);

Thanks,


Danyang

On 18-06-01 10:07 AM, Danyang Su wrote:

Hi All,

My code needs to link to an external C++ library (CGAL). The code is
written in Fortran, and I have already written an interface to let Fortran
call C++ functions. For the sequential version without PETSc, it can be
compiled without problem using the following makefile. The parallel
version without CGAL can also be compiled successfully. However, when I
try to use PETSc together with the CGAL library, I cannot compile the code.
My question is: How can I modify the makefile? Do I need to reconfigure
PETSc with special flags? All the makefile samples are shown below.

#makefile for sequential version

FC = gfortran
#FC = ifort
CXX = g++ -std=c++11

DLIB = -lstdc++ -lm -L/usr/local/lib -rdynamic /usr/local/lib/libmpfr.so
/usr/local/lib/libgmp.so /usr/local/lib/libCGAL_ImageIO.so.11.0.1
/usr/local/lib/libCGAL.so.11.0.1 /usr/local/lib/libboost_thread.so
/usr/local/lib/libboost_system.so -lpthread -lGLU -lGL -lX11 -lXext -lz
/usr/local/lib/libCGAL_ImageIO.so.11.0.1 /usr/local/lib/libCGAL.so.11.0.1
/usr/local/lib/libboost_thread.so /usr/local/lib/libboost_system.so
-lpthread -lGLU -lGL -lX11 -lXext -lz /usr/local/lib/libmpfr.so
/usr/local/lib/libgmp.so /usr/local/lib/libboost_thread.so
/usr/local/lib/libboost_system.so -lpthread -Wl,-rpath,/usr/local/lib

FFLAGS = -O3
CXXFLAGS = -O3

FPPFLAGS =  -DUSECGAL

SRC =./../../

SOURCES = $(SRC)usg/

Re: [petsc-users] Makefile for mixed C++ and Fortran code

2018-06-02 Thread Danyang Su

Hi Barry,

For the code without PETSc, the rules used to compile the code with CGAL are:

DLIB = -lstdc++ -lmetis -lm -L/usr/local/lib -rdynamic 
/usr/local/lib/libmpfr.so /usr/local/lib/libgmp.so 
/usr/local/lib/libCGAL_ImageIO.so.11.0.1 
/usr/local/lib/libCGAL.so.11.0.1 /usr/local/lib/libboost_thread.so 
/usr/local/lib/libboost_system.so -lpthread -lGLU -lGL -lX11 -lXext -lz 
/usr/local/lib/libCGAL_ImageIO.so.11.0.1 
/usr/local/lib/libCGAL.so.11.0.1 /usr/local/lib/libboost_thread.so 
/usr/local/lib/libboost_system.so -lpthread -lGLU -lGL -lX11 -lXext -lz 
/usr/local/lib/libmpfr.so /usr/local/lib/libgmp.so 
/usr/local/lib/libboost_thread.so /usr/local/lib/libboost_system.so 
-lpthread -Wl,-rpath,/usr/local/lib


FFLAGS = -O3 -I$(LIS_INC)
CXXFLAGS = -std=c++11 -O3 -I$(LIS_INC)

However, after adding these to the makefile that uses PETSc, I got errors 
telling me that the PETSc include files cannot be found.


../../solver/solver_snes_common.F90:27:0: fatal error: petscversion.h: No such file or directory
 #include 

Similar errors occur for the other header files. However, if I change the file path to the full path, the code cannot be compiled. Does the rule I use break the PETSc relative include paths?


The make commands I use is

executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out 
$(SOURCES) ${PETSC_LIB} ${LIS_LIB} ${DLIB}

%.o:%.F90
$(FLINKER) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
$(CLINKER) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@

Thanks,

Danyang

On 18-06-01 10:41 AM, Smith, Barry F. wrote:

You need to determine exactly what flags are passed to the C++ compiler for your 
compile that works and make sure those same flags are used in "PETSc version" 
of the makefile. You could add the flags directly to the rule


%.o:%.cpp
$(CLINKER) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@

   Barry



On Jun 1, 2018, at 12:37 PM, Danyang Su  wrote:

Follow up:

With following command

executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out $(SOURCES) 
${PETSC_LIB}

%.o:%.F90
$(FLINKER) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
$(CLINKER) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@

The compiler returns an error: no matching function.

../../usg/cgal_triangulation_2d.cpp: In function ‘void outputTriangulation2d(int, const char*, int, const char*)’:
../../usg/cgal_triangulation_2d.cpp:485:20: error: no matching function for call to ‘std::basic_ofstream::open(std::string&)’
out.open(strfile);

Thanks,


Danyang

On 18-06-01 10:07 AM, Danyang Su wrote:

Hi All,

My code needs to link to an external C++ library (CGAL). The code is written in Fortran and I have already written an interface to let Fortran call C++ functions. The sequential version without PETSc compiles without problem using the following makefile. The parallel version without CGAL also compiles successfully. However, when I try to use PETSc together with the CGAL library, I cannot compile the code. My question is: how should I modify the makefile? Do I need to reconfigure PETSc with special flags? All the makefile samples are shown below.

#makefile for sequential version

FC = gfortran
#FC = ifort
CXX = g++ -std=c++11

DLIB = -lstdc++ -lm -L/usr/local/lib -rdynamic /usr/local/lib/libmpfr.so 
/usr/local/lib/libgmp.so /usr/local/lib/libCGAL_ImageIO.so.11.0.1 
/usr/local/lib/libCGAL.so.11.0.1 /usr/local/lib/libboost_thread.so 
/usr/local/lib/libboost_system.so -lpthread -lGLU -lGL -lX11 -lXext -lz 
/usr/local/lib/libCGAL_ImageIO.so.11.0.1 /usr/local/lib/libCGAL.so.11.0.1 
/usr/local/lib/libboost_thread.so /usr/local/lib/libboost_system.so -lpthread 
-lGLU -lGL -lX11 -lXext -lz /usr/local/lib/libmpfr.so /usr/local/lib/libgmp.so 
/usr/local/lib/libboost_thread.so /usr/local/lib/libboost_system.so -lpthread 
-Wl,-rpath,/usr/local/lib

FFLAGS = -O3
CXXFLAGS = -O3

FPPFLAGS =  -DUSECGAL

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES)
$(FC) $(FFLAGS) $(FPPFLAGS) -o executable.out $(SOURCES) ${LIS_LIB} $(DLIB)
%.o:%.F90
$(FC) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
$(CXX) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@


#makefile for parallel version with PETSc, without CGAL

#FC = ifort
#FC = gfortran

DLIB = -lm

FFLAGS = -O3

FPPFLAGS =  -DUSEPETSC

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out $(SOURCES) 
${PETSC_LIB}


#makefile for parallel version with PETSc, with CGAL, CANNOT work

#FC = ifort
#FC = gfortran

Re: [petsc-users] Makefile for mixed C++ and Fortran code

2018-06-01 Thread Danyang Su

On 18-06-01 10:29 AM, Smith, Barry F. wrote:

   What happens if you add ${DLIB} to the end of the line


executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out $(SOURCES) 
${PETSC_LIB}

   Send all the output from trying this.

Barry

Thanks, Barry. It still does not work after including ${DLIB}.






On Jun 1, 2018, at 12:07 PM, Danyang Su  wrote:

Hi All,

My code needs to link to an external C++ library (CGAL). The code is written in Fortran and I have already written an interface to let Fortran call C++ functions. The sequential version without PETSc compiles without problem using the following makefile. The parallel version without CGAL also compiles successfully. However, when I try to use PETSc together with the CGAL library, I cannot compile the code. My question is: how should I modify the makefile? Do I need to reconfigure PETSc with special flags? All the makefile samples are shown below.

#makefile for sequential version

FC = gfortran
#FC = ifort
CXX = g++ -std=c++11

DLIB = -lstdc++ -lm -L/usr/local/lib -rdynamic /usr/local/lib/libmpfr.so 
/usr/local/lib/libgmp.so /usr/local/lib/libCGAL_ImageIO.so.11.0.1 
/usr/local/lib/libCGAL.so.11.0.1 /usr/local/lib/libboost_thread.so 
/usr/local/lib/libboost_system.so -lpthread -lGLU -lGL -lX11 -lXext -lz 
/usr/local/lib/libCGAL_ImageIO.so.11.0.1 /usr/local/lib/libCGAL.so.11.0.1 
/usr/local/lib/libboost_thread.so /usr/local/lib/libboost_system.so -lpthread 
-lGLU -lGL -lX11 -lXext -lz /usr/local/lib/libmpfr.so /usr/local/lib/libgmp.so 
/usr/local/lib/libboost_thread.so /usr/local/lib/libboost_system.so -lpthread 
-Wl,-rpath,/usr/local/lib

FFLAGS = -O3
CXXFLAGS = -O3

FPPFLAGS =  -DUSECGAL

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES)
$(FC) $(FFLAGS) $(FPPFLAGS) -o executable.out $(SOURCES) ${LIS_LIB} $(DLIB)
%.o:%.F90
$(FC) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
$(CXX) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@


#makefile for parallel version with PETSc, without CGAL

#FC = ifort
#FC = gfortran

DLIB = -lm

FFLAGS = -O3

FPPFLAGS =  -DUSEPETSC

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out $(SOURCES) 
${PETSC_LIB}


#makefile for parallel version with PETSc, with CGAL, CANNOT work

#FC = ifort
#FC = gfortran

DLIB = -lm

FFLAGS = -O3

FPPFLAGS =  -DUSEPETSC -DUSECGAL

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out $(SOURCES) 
${PETSC_LIB}

%.o:%.F90
$(FC) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
$(CXX) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@


Thanks,

Danyang





Re: [petsc-users] Makefile for mixed C++ and Fortran code

2018-06-01 Thread Danyang Su

Follow up:

With following command

executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out 
$(SOURCES) ${PETSC_LIB}


%.o:%.F90
$(FLINKER) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
$(CLINKER) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@

The compiler returns an error: no matching function.

../../usg/cgal_triangulation_2d.cpp: In function ‘void outputTriangulation2d(int, const char*, int, const char*)’:
../../usg/cgal_triangulation_2d.cpp:485:20: error: no matching function for call to ‘std::basic_ofstream::open(std::string&)’
out.open(strfile);

Thanks,


Danyang

On 18-06-01 10:07 AM, Danyang Su wrote:

Hi All,

My code needs to link to an external C++ library (CGAL). The code is written in Fortran and I have already written an interface to let Fortran call C++ functions. The sequential version without PETSc compiles without problem using the following makefile. The parallel version without CGAL also compiles successfully. However, when I try to use PETSc together with the CGAL library, I cannot compile the code. My question is: how should I modify the makefile? Do I need to reconfigure PETSc with special flags? All the makefile samples are shown below.


#makefile for sequential version

FC = gfortran
#FC = ifort
CXX = g++ -std=c++11

DLIB = -lstdc++ -lm -L/usr/local/lib -rdynamic 
/usr/local/lib/libmpfr.so /usr/local/lib/libgmp.so 
/usr/local/lib/libCGAL_ImageIO.so.11.0.1 
/usr/local/lib/libCGAL.so.11.0.1 /usr/local/lib/libboost_thread.so 
/usr/local/lib/libboost_system.so -lpthread -lGLU -lGL -lX11 -lXext 
-lz /usr/local/lib/libCGAL_ImageIO.so.11.0.1 
/usr/local/lib/libCGAL.so.11.0.1 /usr/local/lib/libboost_thread.so 
/usr/local/lib/libboost_system.so -lpthread -lGLU -lGL -lX11 -lXext 
-lz /usr/local/lib/libmpfr.so /usr/local/lib/libgmp.so 
/usr/local/lib/libboost_thread.so /usr/local/lib/libboost_system.so 
-lpthread -Wl,-rpath,/usr/local/lib


FFLAGS = -O3
CXXFLAGS = -O3

FPPFLAGS =  -DUSECGAL

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES)
$(FC) $(FFLAGS) $(FPPFLAGS) -o executable.out $(SOURCES) 
${LIS_LIB} $(DLIB)

%.o:%.F90
$(FC) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
$(CXX) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@


#makefile for parallel version with PETSc, without CGAL

#FC = ifort
#FC = gfortran

DLIB = -lm

FFLAGS = -O3

FPPFLAGS =  -DUSEPETSC

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out 
$(SOURCES) ${PETSC_LIB}



#makefile for parallel version with PETSc, with CGAL, CANNOT work

#FC = ifort
#FC = gfortran

DLIB = -lm

FFLAGS = -O3

FPPFLAGS =  -DUSEPETSC -DUSECGAL

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out 
$(SOURCES) ${PETSC_LIB}


%.o:%.F90
$(FC) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
$(CXX) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@


Thanks,

Danyang





[petsc-users] Makefile for mixed C++ and Fortran code

2018-06-01 Thread Danyang Su

Hi All,

My code needs to link to an external C++ library (CGAL). The code is written in Fortran and I have already written an interface to let Fortran call C++ functions. The sequential version without PETSc compiles without problem using the following makefile. The parallel version without CGAL also compiles successfully. However, when I try to use PETSc together with the CGAL library, I cannot compile the code. My question is: how should I modify the makefile? Do I need to reconfigure PETSc with special flags? All the makefile samples are shown below.


#makefile for sequential version

FC = gfortran
#FC = ifort
CXX = g++ -std=c++11

DLIB = -lstdc++ -lm -L/usr/local/lib -rdynamic /usr/local/lib/libmpfr.so 
/usr/local/lib/libgmp.so /usr/local/lib/libCGAL_ImageIO.so.11.0.1 
/usr/local/lib/libCGAL.so.11.0.1 /usr/local/lib/libboost_thread.so 
/usr/local/lib/libboost_system.so -lpthread -lGLU -lGL -lX11 -lXext -lz 
/usr/local/lib/libCGAL_ImageIO.so.11.0.1 
/usr/local/lib/libCGAL.so.11.0.1 /usr/local/lib/libboost_thread.so 
/usr/local/lib/libboost_system.so -lpthread -lGLU -lGL -lX11 -lXext -lz 
/usr/local/lib/libmpfr.so /usr/local/lib/libgmp.so 
/usr/local/lib/libboost_thread.so /usr/local/lib/libboost_system.so 
-lpthread -Wl,-rpath,/usr/local/lib


FFLAGS = -O3
CXXFLAGS = -O3

FPPFLAGS =  -DUSECGAL

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES)
$(FC) $(FFLAGS) $(FPPFLAGS) -o executable.out $(SOURCES) ${LIS_LIB} 
$(DLIB)

%.o:%.F90
$(FC) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
$(CXX) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@


#makefile for parallel version with PETSc, without CGAL

#FC = ifort
#FC = gfortran

DLIB = -lm

FFLAGS = -O3

FPPFLAGS =  -DUSEPETSC

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out 
$(SOURCES) ${PETSC_LIB}



#makefile for parallel version with PETSc, with CGAL, CANNOT work

#FC = ifort
#FC = gfortran

DLIB = -lm

FFLAGS = -O3

FPPFLAGS =  -DUSEPETSC -DUSECGAL

SRC =./../../

SOURCES = $(SRC)usg/math_common.o\
$(SRC)usg/geometry_definition.o\
$(SRC)usg/cgal_common.o\

...

executable: $(SOURCES) chkopts
-${FLINKER} $(FFLAGS) $(FPPFLAGS) $(CPPFLAGS)  -o executable.out 
$(SOURCES) ${PETSC_LIB}


%.o:%.F90
$(FC) $(FFLAGS) $(FPPFLAGS) -c -frounding-math $< -o $@
%.o:%.cpp
$(CXX) $(CXXFLAGS) $(CPPFLAGS) -c -frounding-math $< -o $@


Thanks,

Danyang



Re: [petsc-users] Segmentation Violation in getting DMPlex coordinates

2018-04-28 Thread Danyang Su

Hi Matt and Barry,

Thanks for your quick response. After changing DMDAVecGetArrayF90 to 
VecGetArrayF90, everything works now.


Thanks,

Danyang
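For reference, a sketch of what the working access pattern looks like after that change (the variable names are the ones from the code quoted below; only the get/restore calls differ, since a DMPlex coordinate vector is accessed as a plain Vec):

    call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr)
    CHKERRQ(ierr)
    call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
    CHKERRQ(ierr)
    call DMGetDefaultSection(cda,cs,ierr)
    CHKERRQ(ierr)
    call PetscSectionGetChart(cs,istart,iend,ierr)
    CHKERRQ(ierr)

    !c VecGetArrayF90 instead of DMDAVecGetArrayF90
    call VecGetArrayF90(gc,coords,ierr)
    CHKERRQ(ierr)
    do ipoint = istart, iend-1
      call PetscSectionGetOffset(cs,ipoint,off,ierr)
      CHKERRQ(ierr)
      inode = ipoint-istart+1
      nodes(inode)%x = coords(off+1)   ! y/z components handled as in the loop below
    end do
    call VecRestoreArrayF90(gc,coords,ierr)
    CHKERRQ(ierr)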


On 18-04-28 01:38 PM, Smith, Barry F. wrote:

   Added runtime error checking for such incorrect calls in 
barry/dmda-calls-type-check



On Apr 28, 2018, at 9:19 AM, Matthew Knepley <knep...@gmail.com> wrote:

On Sat, Apr 28, 2018 at 2:08 AM, Danyang Su <danyang...@gmail.com> wrote:
Hi All,

I use DMPlex and need to get the coordinates back after distribution. However, I always get a segmentation violation when getting the coords values in the following code with multiple processors. If only one processor is used, it works fine.

For each processor, the off value starts from 0, which looks good. I also tried a 0-based index, which gives the same error. Would anyone help to check what is wrong here?

  idof   1 off   0
  idof   2 off   0
  idof   1 off   2
  idof   2 off   2
  idof   1 off   4
  idof   2 off   4
  idof   1 off   6
  idof   2 off   6
  idof   1 off   8
  idof   2 off   8


   DM :: distributedMesh, cda
   Vec :: gc
   PetscScalar, pointer :: coords(:)
   PetscSection ::  cs

   ...

   call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr)
   CHKERRQ(ierr)

   call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
   CHKERRQ(ierr)

   call DMGetDefaultSection(cda,cs,ierr)
   CHKERRQ(ierr)

   call PetscSectionGetChart(cs,istart,iend,ierr)
   CHKERRQ(ierr)

   !c get coordinates array
   call DMDAVecGetArrayF90(cda,gc,coords,ierr)

You cannot call DMDA functions if you have a DMPlex. You just call VecGetArrayF90().

Matt
  
   CHKERRQ(ierr)


   do ipoint = istart, iend-1

 call PetscSectionGetDof(cs,ipoint,dof,ierr)
 CHKERRQ(ierr)

 call PetscSectionGetOffset(cs,ipoint,off,ierr)
 CHKERRQ(ierr)

 inode = ipoint-istart+1

 if (cell_coords == coords_xyz) then
   nodes(inode)%x = coords(off+1)
   nodes(inode)%y = coords(off+2)
   nodes(inode)%z = coords(off+3)
 else if (cell_coords == coords_xy) then
   nodes(inode)%x = coords(off+1)
   nodes(inode)%y = coords(off+2)
   nodes(inode)%z = 0.0d0
 else if (cell_coords == coords_yz) then
   nodes(inode)%x = 0.0d0
   nodes(inode)%y = coords(off+1)
   nodes(inode)%z = coords(off+2)
 else if (cell_coords ==coords_xz) then
   nodes(inode)%x = coords(off+1)
   nodes(inode)%y = 0.0d0
   nodes(inode)%z = coords(off+2)
 end if
   end do

   call DMDAVecRestoreArrayF90(cda,gc,coords,ierr)
   CHKERRQ(ierr)

Thanks,

Danyang





--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/




[petsc-users] Segmentation Violation in getting DMPlex coordinates

2018-04-28 Thread Danyang Su

Hi All,

I use DMPlex and need to get the coordinates back after distribution. However, I always get a segmentation violation when getting the coords values in the following code with multiple processors. If only one processor is used, it works fine.

For each processor, the off value starts from 0, which looks good. I also tried a 0-based index, which gives the same error. Would anyone help to check what is wrong here?


 idof   1 off   0
 idof   2 off   0
 idof   1 off   2
 idof   2 off   2
 idof   1 off   4
 idof   2 off   4
 idof   1 off   6
 idof   2 off   6
 idof   1 off   8
 idof   2 off   8


  DM :: distributedMesh, cda
  Vec :: gc
  PetscScalar, pointer :: coords(:)
  PetscSection ::  cs

  ...

  call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr)
  CHKERRQ(ierr)

  call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
  CHKERRQ(ierr)

  call DMGetDefaultSection(cda,cs,ierr)
  CHKERRQ(ierr)

  call PetscSectionGetChart(cs,istart,iend,ierr)
  CHKERRQ(ierr)

  !c get coordinates array
  call DMDAVecGetArrayF90(cda,gc,coords,ierr)
  CHKERRQ(ierr)

  do ipoint = istart, iend-1

call PetscSectionGetDof(cs,ipoint,dof,ierr)
CHKERRQ(ierr)

call PetscSectionGetOffset(cs,ipoint,off,ierr)
CHKERRQ(ierr)

inode = ipoint-istart+1

if (cell_coords == coords_xyz) then
  nodes(inode)%x = coords(off+1)
  nodes(inode)%y = coords(off+2)
  nodes(inode)%z = coords(off+3)
else if (cell_coords == coords_xy) then
  nodes(inode)%x = coords(off+1)
  nodes(inode)%y = coords(off+2)
  nodes(inode)%z = 0.0d0
else if (cell_coords == coords_yz) then
  nodes(inode)%x = 0.0d0
  nodes(inode)%y = coords(off+1)
  nodes(inode)%z = coords(off+2)
else if (cell_coords ==coords_xz) then
  nodes(inode)%x = coords(off+1)
  nodes(inode)%y = 0.0d0
  nodes(inode)%z = coords(off+2)
end if
  end do

  call DMDAVecRestoreArrayF90(cda,gc,coords,ierr)
  CHKERRQ(ierr)

Thanks,

Danyang




Re: [petsc-users] Get vertex index of each cell in DMPlex after distribution

2018-04-27 Thread Danyang Su

On 2018-04-27 04:11 AM, Matthew Knepley wrote:
On Fri, Apr 27, 2018 at 2:09 AM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi Matt,

Sorry if this is a stupid question.

In the previous code for unstructured grids, I create labels to mark the original node/cell indices from the VTK file and then distribute them so that each subdomain has a copy of its original node and cell indices, as well as the PETSc numbering. Now I am trying to avoid using a large number of keys in DMSetLabelValue since this costs a lot of time for large problems.

I can get the coordinates of each subdomain after distribution by using DMGetCoordinatesLocal and DMGetCoordinateDM.

How can I get the vertex indices of each cell after distribution? Would you please give me a hint or the functions that I can use?

You can permute the vectors back to the natural ordering using

http://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html

which says you have to call DMPlexSetUseNaturalSF() before 
distributing the mesh. It is tested in


  src/dm/impls/plex/examples/tests/ex15.c

so you can see how its intended to work. It is very new and has not 
been tested by many people.


I can see how you might want this for small tests. Why would you want 
it for production models?

Hi Matt,

This is indeed what I need. Some years-old cases import initial conditions from external files, which are in the natural ordering of the original mesh. I just want to keep the code compatible with the old input files.


Thanks,

Danyang
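A rough sketch of the call sequence Matt describes, assuming the Fortran bindings mirror the C names used in the ex15.c test referenced above (migration_sf, dm_dist, global_vec and natural_vec are illustrative names, not from the original code):

    ! must be set before the mesh is distributed
    call DMPlexSetUseNaturalSF(dm,PETSC_TRUE,ierr)
    CHKERRQ(ierr)
    call DMPlexDistribute(dm,stencil_width,migration_sf,dm_dist,ierr)
    CHKERRQ(ierr)
    ...
    ! map a vector between PETSc's global ordering and the file (natural) ordering
    call DMPlexGlobalToNaturalBegin(dm_dist,global_vec,natural_vec,ierr)
    CHKERRQ(ierr)
    call DMPlexGlobalToNaturalEnd(dm_dist,global_vec,natural_vec,ierr)
    CHKERRQ(ierr)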


  Thanks,

    Matt

Thanks,

Danyang


On 18-04-25 02:12 PM, Danyang Su wrote:

On 2018-04-25 09:47 AM, Matthew Knepley wrote:

On Wed, Apr 25, 2018 at 12:40 PM, Danyang Su
<danyang...@gmail.com <mailto:danyang...@gmail.com>> wrote:

Hi Matthew,

In the worst case, every node/cell may have different label.

Do not use Label for this. Its not an appropriate thing. If
every cell is different, just use the cell number.
Labels are for mapping a relatively small number of keys (like
material IDs) to sets of points (cells, vertices, etc.)
Its not a great data structure for a permutation.

Yes. If there is a small number of keys, it runs very fast, even for more than one million DMSetLabelValue calls. The performance just deteriorates as the number of keys increases.

I cannot avoid DMSetLabelValue, as the node/cell indices of the original mesh are needed for the previous input files that use some of the global node/cell indices to set values. But if I can get the natural order of nodes/cells from DMPlex, I can discard the use of DMSetLabelValue. Is there any function that can do this job?

Thanks,

Danyang


However, I still do not believe these numbers. The old code does
a string comparison every time. I will setup a test.

   Matt

Below is one of the worst scenarios, with 102299 nodes and 102299 different labels, for testing. I found that the time cost increases during the loop. The first 9300 loops take the least time (<0.5 s) while the last 9300 loops take much more time (>7.7 s), as shown below. If I use a larger mesh with >1 million nodes, this part runs very, very slowly. PETSc is configured with optimization on.

Configure options --with-cc=gcc --with-cxx=g++
--with-fc=gfortran --download-mpich --download-scalapack
--download-parmetis --download-metis --download-ptscotch
--download-fblaslapack --download-hypre
--download-superlu_dist --download-hdf5=yes
--download-ctetgen --with-debugging=0 COPTFLAGS="-O3
-march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native
-mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native"

istart   iend     progress      CPU_Time           time cost - old (sec)   time cost - new (sec)
0        9299     0             1524670045.51166
9300     18599    0.100010753   1524670045.99605   0.4843890667            0.497246027
18600    27899    0.200010747   1524670047.32635   1.330302                1.3820912838
27900    37199    0.300010741   1524670049.3066    1.9802515507            2.2439446449
37200    46499    0.400010765   1524670052.1594    2.852804184             3.0739262104
46500    55799    0.500010729   1524670055.90961   3.7502081394            3.9270553589
55800    65099    0.600010753   1524670060.47654   4.5669286251            4.7571902275
65100    74399    0.700010777   1524670066.0941    5.6175630093            5.7428796291
74400    83699    0.800010741   1524670072.53886   6.44475317              6.5761549473
83700    92998    0.900010765   1524670079.99072   7.4518604279            7.4606924057

[petsc-users] Get vertex index of each cell in DMPlex after distribution

2018-04-27 Thread Danyang Su

Hi Matt,

Sorry if this is a stupid question.

In the previous code for unstructured grids, I create labels to mark the original node/cell indices from the VTK file and then distribute them so that each subdomain has a copy of its original node and cell indices, as well as the PETSc numbering. Now I am trying to avoid using a large number of keys in DMSetLabelValue since this costs a lot of time for large problems.


I can get the coordinates of subdomain after distribution by using 
DMGetCoordinatesLocal and DMGetCoordinateDM.


How can I get the vertex index of each cell after distribution? Would 
you please give me a hint or functions that I can use.


Thanks,

Danyang


On 18-04-25 02:12 PM, Danyang Su wrote:

On 2018-04-25 09:47 AM, Matthew Knepley wrote:
On Wed, Apr 25, 2018 at 12:40 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi Matthew,

In the worst case, every node/cell may have different label.

Do not use Label for this. Its not an appropriate thing. If every 
cell is different, just use the cell number.
Labels are for mapping a relatively small number of keys (like 
material IDs) to sets of points (cells, vertices, etc.)

Its not a great data structure for a permutation.
Yes. If there is a small number of keys, it runs very fast, even for more than one million DMSetLabelValue calls. The performance just deteriorates as the number of keys increases.


I cannot avoid DMSetLabelValue, as the node/cell indices of the original mesh are needed for the previous input files that use some of the global node/cell indices to set values. But if I can get the natural order of nodes/cells from DMPlex, I can discard the use of DMSetLabelValue. Is there any function that can do this job?


Thanks,

Danyang


However, I still do not believe these numbers. The old code does a 
string comparison every time. I will setup a test.


   Matt

Below is one of the worst scenarios, with 102299 nodes and 102299 different labels, for testing. I found that the time cost increases during the loop. The first 9300 loops take the least time (<0.5 s) while the last 9300 loops take much more time (>7.7 s), as shown below. If I use a larger mesh with >1 million nodes, this part runs very, very slowly. PETSc is configured with optimization on.

Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
--download-mpich --download-scalapack --download-parmetis
--download-metis --download-ptscotch --download-fblaslapack
--download-hypre --download-superlu_dist --download-hdf5=yes
--download-ctetgen --with-debugging=0 COPTFLAGS="-O3
-march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native
-mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native"

istart   iend     progress      CPU_Time           time cost - old (sec)   time cost - new (sec)
0        9299     0             1524670045.51166
9300     18599    0.100010753   1524670045.99605   0.4843890667            0.497246027
18600    27899    0.200010747   1524670047.32635   1.330302                1.3820912838
27900    37199    0.300010741   1524670049.3066    1.9802515507            2.2439446449
37200    46499    0.400010765   1524670052.1594    2.852804184             3.0739262104
46500    55799    0.500010729   1524670055.90961   3.7502081394            3.9270553589
55800    65099    0.600010753   1524670060.47654   4.5669286251            4.7571902275
65100    74399    0.700010777   1524670066.0941    5.6175630093            5.7428796291
74400    83699    0.800010741   1524670072.53886   6.44475317              6.5761549473
83700    92998    0.900010765   1524670079.99072   7.4518604279            7.4606924057
92999    102298   1             1524670087.71066   7.7199423313            8.2424075603



old code

do ipoint = 0, istart-1
  !c output time cost, use 1 processor to test
  if (b_enable_output .and. rank == 0) then
if (mod(ipoint,iprogress) == 0 .or. ipoint ==
istart-1) then
  !write(*,'(f3.1,1x)',advance="no") (ipoint+1.0)/istart
  write(*,*) ipoint,
(ipoint+1.0)/istart,"time",MPI_Wtime()
end if
  end if

  call DMSetLabelValue(dmda_flow%da,"cid_lg2g",ipoint, &
   ipoint+1,ierr)
  CHKERRQ(ierr)
end do


new code

call DMCreateLabel(dmda_flow%da,'cid_lg2g',ierr)
CHKERRQ(ierr)

call DMGetLabel(dmda_flow%da,'cid_lg2g',label, ierr)
CHKERRQ(ierr)

do ipoint = 0, istart-1
  !c output time cost, use 1 processor to test
  if (b_enable_output .and. rank == 0) then
if (mod(ipoint,iprogress) == 0 .or. ipoint ==
istart-1) then
  !writ

Re: [petsc-users] DMSetLabelValue takes a lot of time for large domain

2018-04-25 Thread Danyang Su

On 2018-04-25 09:47 AM, Matthew Knepley wrote:
On Wed, Apr 25, 2018 at 12:40 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi Matthew,

In the worst case, every node/cell may have different label.

Do not use Label for this. Its not an appropriate thing. If every cell 
is different, just use the cell number.
Labels are for mapping a relatively small number of keys (like 
material IDs) to sets of points (cells, vertices, etc.)

Its not a great data structure for a permutation.
Yes. If there is a small number of keys, it runs very fast, even for more than one million DMSetLabelValue calls. The performance just deteriorates as the number of keys increases.


I cannot avoid DMSetLabelValue, as the node/cell indices of the original mesh are needed for the previous input files that use some of the global node/cell indices to set values. But if I can get the natural order of nodes/cells from DMPlex, I can discard the use of DMSetLabelValue. Is there any function that can do this job?


Thanks,

Danyang


However, I still do not believe these numbers. The old code does a 
string comparison every time. I will setup a test.


   Matt

Below is one of the worst scenarios, with 102299 nodes and 102299 different labels, for testing. I found that the time cost increases during the loop. The first 9300 loops take the least time (<0.5 s) while the last 9300 loops take much more time (>7.7 s), as shown below. If I use a larger mesh with >1 million nodes, this part runs very, very slowly. PETSc is configured with optimization on.

Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
--download-mpich --download-scalapack --download-parmetis
--download-metis --download-ptscotch --download-fblaslapack
--download-hypre --download-superlu_dist --download-hdf5=yes
--download-ctetgen --with-debugging=0 COPTFLAGS="-O3 -march=native
-mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native"
FOPTFLAGS="-O3 -march=native -mtune=native"

istart   iend     progress      CPU_Time           time cost - old (sec)   time cost - new (sec)
0        9299     0             1524670045.51166
9300     18599    0.100010753   1524670045.99605   0.4843890667            0.497246027
18600    27899    0.200010747   1524670047.32635   1.330302                1.3820912838
27900    37199    0.300010741   1524670049.3066    1.9802515507            2.2439446449
37200    46499    0.400010765   1524670052.1594    2.852804184             3.0739262104
46500    55799    0.500010729   1524670055.90961   3.7502081394            3.9270553589
55800    65099    0.600010753   1524670060.47654   4.5669286251            4.7571902275
65100    74399    0.700010777   1524670066.0941    5.6175630093            5.7428796291
74400    83699    0.800010741   1524670072.53886   6.44475317              6.5761549473
83700    92998    0.900010765   1524670079.99072   7.4518604279            7.4606924057
92999    102298   1             1524670087.71066   7.7199423313            8.2424075603



old code

    do ipoint = 0, istart-1
  !c output time cost, use 1 processor to test
  if (b_enable_output .and. rank == 0) then
    if (mod(ipoint,iprogress) == 0 .or. ipoint ==
istart-1) then
  !write(*,'(f3.1,1x)',advance="no") (ipoint+1.0)/istart
  write(*,*) ipoint,
(ipoint+1.0)/istart,"time",MPI_Wtime()
    end if
  end if

  call DMSetLabelValue(dmda_flow%da,"cid_lg2g",ipoint, &
   ipoint+1,ierr)
  CHKERRQ(ierr)
    end do


new code

    call DMCreateLabel(dmda_flow%da,'cid_lg2g',ierr)
    CHKERRQ(ierr)

    call DMGetLabel(dmda_flow%da,'cid_lg2g',label, ierr)
    CHKERRQ(ierr)

    do ipoint = 0, istart-1
  !c output time cost, use 1 processor to test
  if (b_enable_output .and. rank == 0) then
    if (mod(ipoint,iprogress) == 0 .or. ipoint ==
istart-1) then
  !write(*,'(f3.1,1x)',advance="no") (ipoint+1.0)/istart
  write(*,*) ipoint,
(ipoint+1.0)/istart,"time",MPI_Wtime()
    end if
  end if

  call DMLabelSetValue(label,ipoint,ipoint+1,ierr)
  CHKERRQ(ierr)
    end do

Thanks,

Danyang

On 2018-04-25 03:16 AM, Matthew Knepley wrote:

On Tue, Apr 24, 2018 at 11:57 PM, Danyang Su
<danyang...@gmail.com <mailto:danyang...@gmail.com>> wrote:

Hi All,

I use DMPlex in unstructured grid code and recently found
DMSetLabelValue takes a lot of time for large problem, e.g.,
num. of cells > 1 million. In my c

Re: [petsc-users] DMSetLabelValue takes a lot of time for large domain

2018-04-25 Thread Danyang Su

Hi Matthew,

In the worst case, every node/cell may have different label.

Below is one of the worst scenarios, with 102299 nodes and 102299 different labels, for testing. I found that the time cost increases during the loop. The first 9300 loops take the least time (<0.5 s) while the last 9300 loops take much more time (>7.7 s), as shown below. If I use a larger mesh with >1 million nodes, this part runs very, very slowly. PETSc is configured with optimization on.


Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran 
--download-mpich --download-scalapack --download-parmetis 
--download-metis --download-ptscotch --download-fblaslapack 
--download-hypre --download-superlu_dist --download-hdf5=yes 
--download-ctetgen --with-debugging=0 COPTFLAGS="-O3 -march=native 
-mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native" 
FOPTFLAGS="-O3 -march=native -mtune=native"


istart   iend     progress      CPU_Time           time cost - old (sec)   time cost - new (sec)
0        9299     0             1524670045.51166
9300     18599    0.100010753   1524670045.99605   0.4843890667            0.497246027
18600    27899    0.200010747   1524670047.32635   1.330302                1.3820912838
27900    37199    0.300010741   1524670049.3066    1.9802515507            2.2439446449
37200    46499    0.400010765   1524670052.1594    2.852804184             3.0739262104
46500    55799    0.500010729   1524670055.90961   3.7502081394            3.9270553589
55800    65099    0.600010753   1524670060.47654   4.5669286251            4.7571902275
65100    74399    0.700010777   1524670066.0941    5.6175630093            5.7428796291
74400    83699    0.800010741   1524670072.53886   6.44475317              6.5761549473
83700    92998    0.900010765   1524670079.99072   7.4518604279            7.4606924057
92999    102298   1             1524670087.71066   7.7199423313            8.2424075603



old code

    do ipoint = 0, istart-1
  !c output time cost, use 1 processor to test
  if (b_enable_output .and. rank == 0) then
    if (mod(ipoint,iprogress) == 0 .or. ipoint == istart-1) then
  !write(*,'(f3.1,1x)',advance="no") (ipoint+1.0)/istart
  write(*,*) ipoint, (ipoint+1.0)/istart,"time",MPI_Wtime()
    end if
  end if

  call DMSetLabelValue(dmda_flow%da,"cid_lg2g",ipoint, &
   ipoint+1,ierr)
  CHKERRQ(ierr)
    end do


new code

    call DMCreateLabel(dmda_flow%da,'cid_lg2g',ierr)
    CHKERRQ(ierr)

    call DMGetLabel(dmda_flow%da,'cid_lg2g',label, ierr)
    CHKERRQ(ierr)

    do ipoint = 0, istart-1
  !c output time cost, use 1 processor to test
  if (b_enable_output .and. rank == 0) then
    if (mod(ipoint,iprogress) == 0 .or. ipoint == istart-1) then
  !write(*,'(f3.1,1x)',advance="no") (ipoint+1.0)/istart
  write(*,*) ipoint, (ipoint+1.0)/istart,"time",MPI_Wtime()
    end if
  end if

  call DMLabelSetValue(label,ipoint,ipoint+1,ierr)
  CHKERRQ(ierr)
    end do

Thanks,

Danyang

On 2018-04-25 03:16 AM, Matthew Knepley wrote:
On Tue, Apr 24, 2018 at 11:57 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi All,

I use DMPlex in my unstructured grid code and recently found that DMSetLabelValue takes a lot of time for large problems, e.g., num. of cells > 1 million. In my code, I use


I read your code wrong. For large loop, you should not use the 
convenience function. You should use


DMPlexCreateFromCellList ()


DMGetLabel(dm, name, &label)


Loop over all cells/nodes{

DMSetLabelValue


Replace this by DMLabelSetValue(label, point, val)

}

DMPlexDistribute

The code works fine except that DMSetLabelValue takes a lot of time for large problems. I use DMSetLabelValue to set the material id for all the nodes or cells so that each subdomain has a copy of the material ids. Are there any other functions that can be used more efficiently, e.g., to set labels by array, not one by one?


That should take much less time.

  Thanks,

     Matt

Thanks,

Danyang




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/%7Emk51/>




[petsc-users] DMSetLabelValue takes a lot of time for large domain

2018-04-24 Thread Danyang Su

Hi All,

I use DMPlex in my unstructured grid code and recently found that DMSetLabelValue takes a lot of time for large problems, e.g., num. of cells > 1 million. In my code, I use


DMPlexCreateFromCellList ()

Loop over all cells/nodes{

DMSetLabelValue

}

DMPlexDistribute

The code works fine except that DMSetLabelValue takes a lot of time for large problems. I use DMSetLabelValue to set the material id for all the nodes or cells so that each subdomain has a copy of the material ids. Are there any other functions that can be used more efficiently, e.g., to set labels by array, not one by one?


Thanks,

Danyang



Re: [petsc-users] [petsc-maint] how to check if cell is local owned in DMPlex

2018-03-08 Thread Danyang Su

On 18-03-07 03:54 PM, Jed Brown wrote:

Danyang Su <danyang...@gmail.com> writes:


Based on my test, this function works fine using the current PETSc-dev version, but I cannot get it compiled correctly for Fortran code using other versions, as mentioned in the previous emails. I asked this question because some of the clusters we use do not have the PETSc-dev version and it takes time to get staff to install another version.

You can install PETSc yourself as a normal user.

Thanks, Jed. This is the way I will follow.

Danyang



Re: [petsc-users] [petsc-maint] how to check if cell is local owned in DMPlex

2018-03-07 Thread Danyang Su

Hi All,

Thanks again for all your help during my code development; it turns out that DMPlex for unstructured grids works pretty well. I have another question.

Is there any alternative method to get the number of leaves without using PetscSFGetGraph?

Based on my test, this function works fine using the current PETSc-dev version, but I cannot get it compiled correctly for Fortran code using other versions, as mentioned in the previous emails. I asked this question because some of the clusters we use do not have the PETSc-dev version and it takes time to get staff to install another version.


Thanks,

Danyang

On 18-03-05 11:50 AM, Smith, Barry F. wrote:

MatSolverPackage

became MatSolverType




On Mar 5, 2018, at 1:35 PM, Danyang Su <danyang...@gmail.com> wrote:

Hi Barry and Matt,

The compiling problem should be caused by the PETSc version installed on my computer. When I update to the PETSc-dev version, the ex1f example works fine. However, I cannot compile this example under PETSc 3.8.3.

After updating to the PETSc-dev version, I encounter another compiling problem in my code:

 MatSolverPackage :: solver_pkg_flow
 1
Error: Unclassifiable statement at (1)

Including petscmat.h or petscpc.h does not help to solve this problem. I can rewrite this part to get rid of it, but I would rather keep it if there is an alternative way to go. Which header file should I include in order to use MatSolverPackage?

Thanks,

Danyang
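Per Barry's note above that MatSolverPackage became MatSolverType, a one-line sketch of the updated declaration (assuming a petsc-dev / post-3.8 build with use petscmat in scope):

    MatSolverType :: solver_pkg_flow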


On 18-03-04 11:15 AM, Smith, Barry F. wrote:

   See src/vec/is/sf/examples/tutorials/ex1f.F90 in the master branch of the 
PETSc git repository

   BTW:

git grep -i petscsfgetgraph

will show every use of the function in the source code. Very useful tool

Barry



On Mar 4, 2018, at 1:05 PM, Danyang Su <danyang...@gmail.com> wrote:



On 18-03-04 08:08 AM, Matthew Knepley wrote:

On Fri, Mar 2, 2018 at 3:22 PM, Danyang Su <danyang...@gmail.com> wrote:
Hi Matt,
I use the latest Fortran style in PETSc 3.8. Enclosed are the PETSc 
configuration, code compiling log and the function that causes compiling error. 
The compiling error happens after I include petscsf.h in the following section. 
I didn't find petscsf.h in petsc/finclude/ folder so I use the head file in the 
'include' folder and this seems not allowed.

I apologize for taking so long. The PetscSF definitions are in

#include 

Hi Matt,

After including
#include 
   use petscis

I still get error saying undefined reference to `petscsfgetgraph_'

Did I miss any other head file?

Thanks,

Danyang

You are correct that they should be moved out.

   Thanks,

  Matt

#ifdef PETSC_V3_8_X

#include 
#include 
#include 
#include 
   use petscsys
   use petscdmplex
   use petscsf

#endif

Thanks,

Danyang


On 18-03-02 12:08 PM, Matthew Knepley wrote:

On Fri, Mar 2, 2018 at 3:00 PM, Danyang Su <danyang...@gmail.com> wrote:
On 18-03-02 10:58 AM, Matthew Knepley wrote:

On Fri, Mar 2, 2018 at 1:41 PM, Danyang Su <danyang...@gmail.com> wrote:

On 18-02-19 03:30 PM, Matthew Knepley wrote:

On Mon, Feb 19, 2018 at 3:11 PM, Danyang Su <danyang...@gmail.com> wrote:
Hi Matt,

Would you please let me know how to check whether a cell is locally owned? When overlap is 0 in DMPlexDistribute, all the cells are locally owned. What about overlap > 0? It seems impossible to check by node, because a cell can be locally owned even if none of the nodes in the cell is locally owned.

If a cell is in the PetscSF, then it is not locally owned. The local nodes in 
the SF are sorted, so I use
PetscFindInt 
(http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html).

Hi Matt,

Would you please give me a little more about how to mark the ghost cells when 
overlap > 0? What do you mean a cell is in the PetscSF? I use PetscSFView to 
export the graph (original mesh file pile.vtk) and it exports all the cells, 
including the ghost cells (PETScSFView.txt).

Yes, I will send you some sample code when I get time. The first problem is 
that you are looking at a different PetscSF. This looks like the
one returned by DMPlexDistribute(). This is mapping the serial mesh to the 
parallel mesh. You want

   
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMGetPointSF.html

Then you can look at

   
https://bitbucket.org/petsc/petsc/src/1788fc36644e622df8cb1a0de85676ccc5af0239/src/dm/impls/plex/plexsubmesh.c?at=master=file-view-default#plexsubmesh.c-683

I get the pointSF, get out the list of leaves, and find points in it using 
PetscFindInt()

Hi Matt,
By using the local dm, I can get the PetscSF I want, as shown below. Now I need to get the number 
of ghost cells or local cells (here 4944) or number of leaves (here 825) for each processor. I try 
to use PetscSFGetGraph to get number of leaves in Fortran. After including "petscsf.h", I 
got compilation error saying "You need a ISO C conforming compiler to use the glibc

Re: [petsc-users] [petsc-maint] how to check if cell is local owned in DMPlex

2018-03-05 Thread Danyang Su

Hi Barry and Matt,

The compiling problem should be caused by the PETSc version installed on my computer. When I update to the PETSc-dev version, the ex1f example works fine. However, I cannot compile this example under PETSc 3.8.3.

After updating to the PETSc-dev version, I encounter another compiling problem in my code:

    MatSolverPackage :: solver_pkg_flow
    1
Error: Unclassifiable statement at (1)

Including petscmat.h or petscpc.h does not help to solve this problem. I can rewrite this part to get rid of it, but I would rather keep it if there is an alternative way to go. Which header file should I include in order to use MatSolverPackage?


Thanks,

Danyang


On 18-03-04 11:15 AM, Smith, Barry F. wrote:

   See src/vec/is/sf/examples/tutorials/ex1f.F90 in the master branch of the 
PETSc git repository

   BTW:

git grep -i petscsfgetgraph

will show every use of the function in the source code. Very useful tool

Barry



On Mar 4, 2018, at 1:05 PM, Danyang Su <danyang...@gmail.com> wrote:



On 18-03-04 08:08 AM, Matthew Knepley wrote:

On Fri, Mar 2, 2018 at 3:22 PM, Danyang Su <danyang...@gmail.com> wrote:
Hi Matt,
I use the latest Fortran style in PETSc 3.8. Enclosed are the PETSc 
configuration, code compiling log and the function that causes compiling error. 
The compiling error happens after I include petscsf.h in the following section. 
I didn't find petscsf.h in petsc/finclude/ folder so I use the head file in the 
'include' folder and this seems not allowed.

I apologize for taking so long. The PetscSF definitions are in

#include 

Hi Matt,

After including
#include 
   use petscis

I still get error saying undefined reference to `petscsfgetgraph_'

Did I miss any other head file?

Thanks,

Danyang

You are correct that they should be moved out.

   Thanks,

  Matt

#ifdef PETSC_V3_8_X

#include 
#include 
#include 
#include 
   use petscsys
   use petscdmplex
   use petscsf

#endif

Thanks,

Danyang


On 18-03-02 12:08 PM, Matthew Knepley wrote:

On Fri, Mar 2, 2018 at 3:00 PM, Danyang Su <danyang...@gmail.com> wrote:
On 18-03-02 10:58 AM, Matthew Knepley wrote:

On Fri, Mar 2, 2018 at 1:41 PM, Danyang Su <danyang...@gmail.com> wrote:

On 18-02-19 03:30 PM, Matthew Knepley wrote:

On Mon, Feb 19, 2018 at 3:11 PM, Danyang Su <danyang...@gmail.com> wrote:
Hi Matt,

Would you please let me know how to check whether a cell is locally owned? When overlap is 0 in DMPlexDistribute, all the cells are locally owned. What about overlap > 0? It seems impossible to check by node, because a cell can be locally owned even if none of the nodes in the cell is locally owned.

If a cell is in the PetscSF, then it is not locally owned. The local nodes in 
the SF are sorted, so I use
PetscFindInt 
(http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html).

Hi Matt,

Would you please give me a little more about how to mark the ghost cells when 
overlap > 0? What do you mean a cell is in the PetscSF? I use PetscSFView to 
export the graph (original mesh file pile.vtk) and it exports all the cells, 
including the ghost cells (PETScSFView.txt).

Yes, I will send you some sample code when I get time. The first problem is 
that you are looking at a different PetscSF. This looks like the
one returned by DMPlexDistribute(). This is mapping the serial mesh to the 
parallel mesh. You want

   
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMGetPointSF.html

Then you can look at

   
https://bitbucket.org/petsc/petsc/src/1788fc36644e622df8cb1a0de85676ccc5af0239/src/dm/impls/plex/plexsubmesh.c?at=master=file-view-default#plexsubmesh.c-683

I get the pointSF, get out the list of leaves, and find points in it using 
PetscFindInt()

Hi Matt,
By using the local dm, I can get the PetscSF I want, as shown below. Now I need to get the number of ghost cells, the number of local cells (here 4944), or the number of leaves (here 825) for each processor. I tried to use PetscSFGetGraph to get the number of leaves in Fortran. After including "petscsf.h", I got a compilation error saying "You need a ISO C conforming compiler to use the glibc headers". Is there any alternative way to do this? I do not need the ghost-neighbor mapping, just the number of locally owned cells.

Also, make sure you are using the latest Fortran style for PETSc:

   
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html
  
   [0] Number of roots=11449, leaves=825, remote ranks=1

   [0] 4944 <- (1,0)
   [0] 4945 <- (1,28)
   [0] 4946 <- (1,56)
...
   [1] Number of roots=11695, leaves=538, remote ranks=1
   [1] 5056 <- (0,21)
   [1] 5057 <- (0,43)
   [1] 5058 <- (0,65)
   [1] 5059 <- (0,87)

In file included from /usr/include/features.h:375:0,
  from /usr/include/stdio.h:28,
  from /home/dsu/Soft/PETSc/petsc-3.8.3/include/petscsys.h:175

Re: [petsc-users] how to check if cell is local owned in DMPlex

2018-03-04 Thread Danyang Su



On 18-03-04 08:08 AM, Matthew Knepley wrote:
On Fri, Mar 2, 2018 at 3:22 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi Matt,

I use the latest Fortran style in PETSc 3.8. Enclosed are the
PETSc configuration, code compiling log and the function that
causes compiling error. The compiling error happens after I
include petscsf.h in the following section. I didn't find
petscsf.h in petsc/finclude/ folder so I use the head file in the
'include' folder and this seems not allowed.


I apologize for taking so long. The PetscSF definitions are in

#include 

Hi Matt,

After including
#include 
  use petscis

I still get error saying undefined reference to `petscsfgetgraph_'

Did I miss any other head file?

Thanks,

Danyang


You are correct that they should be moved out.

  Thanks,

 Matt

#ifdef PETSC_V3_8_X

#include 
#include 
#include 
#include 
  use petscsys
  use petscdmplex
  use petscsf

#endif

Thanks,

Danyang


On 18-03-02 12:08 PM, Matthew Knepley wrote:

On Fri, Mar 2, 2018 at 3:00 PM, Danyang Su <danyang...@gmail.com
<mailto:danyang...@gmail.com>> wrote:

On 18-03-02 10:58 AM, Matthew Knepley wrote:


On Fri, Mar 2, 2018 at 1:41 PM, Danyang Su
<danyang...@gmail.com <mailto:danyang...@gmail.com>> wrote:


On 18-02-19 03:30 PM, Matthew Knepley wrote:

    On Mon, Feb 19, 2018 at 3:11 PM, Danyang Su
<danyang...@gmail.com <mailto:danyang...@gmail.com>> wrote:

Hi Matt,

Would you please let me know how to check if a cell
is local owned? When overlap is 0 in
DMPlexDistribute, all the cells are local owned.
How about overlap > 0? It sounds like impossible to
check by node because a cell can be local owned
even if none of the nodes in this cell is local owned.


If a cell is in the PetscSF, then it is not locally
owned. The local nodes in the SF are sorted, so I use
PetscFindInt

(http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html

<http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html>).

Hi Matt,

Would you please give me a little more about how to mark
the ghost cells when overlap > 0? What do you mean a
cell is in the PetscSF? I use PetscSFView to export the
graph (original mesh file pile.vtk) and it exports all
the cells, including the ghost cells (PETScSFView.txt).


Yes, I will send you some sample code when I get time. The
first problem is that you are looking at a different
PetscSF. This looks like the
one returned by DMPlexDistribute(). This is mapping the
serial mesh to the parallel mesh. You want


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMGetPointSF.html

<http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMGetPointSF.html>

Then you can look at


https://bitbucket.org/petsc/petsc/src/1788fc36644e622df8cb1a0de85676ccc5af0239/src/dm/impls/plex/plexsubmesh.c?at=master=file-view-default#plexsubmesh.c-683

<https://bitbucket.org/petsc/petsc/src/1788fc36644e622df8cb1a0de85676ccc5af0239/src/dm/impls/plex/plexsubmesh.c?at=master=file-view-default#plexsubmesh.c-683>

I get the pointSF, get out the list of leaves, and find
points in it using PetscFindInt()

Hi Matt,
By using the local dm, I can get the PetscSF I want, as shown
below. Now I need to get the number of ghost cells or local
cells (here 4944) or number of leaves (here 825) for each
processor. I try to use PetscSFGetGraph to get number of
leaves in Fortran. After including "petscsf.h", I got
compilation error saying "You need a ISO C conforming
compiler to use the glibc headers". Is there any alternative
way to do this? I do not need the ghost-neighbor mapping, but
just the number of local owned cells.


Also, make sure you are using the latest Fortran style for PETSc:


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html

<http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html>

  [0] Number of roots=11449, leaves=825, remote ranks=1
  [0] 4944 <- (1,0)
  [0] 4945 <- (1,28)
  [0] 4946 <- (1,56)
...
  [1] Number of roots=11695, leaves=538, remote ranks=1
  [1] 5056 <- (0,21)
  [1] 5057 <- (0,43)
  [1] 5058 <- (0,65)
  [1] 5059 <- (0,87)

In file included from /usr/inclu

Re: [petsc-users] how to check if cell is local owned in DMPlex

2018-03-02 Thread Danyang Su

Hi Matt,

I use the latest Fortran style in PETSc 3.8. Enclosed are the PETSc 
configuration, code compiling log and the function that causes compiling 
error. The compiling error happens after I include petscsf.h in the 
following section. I didn't find petscsf.h in petsc/finclude/ folder so 
I use the head file in the 'include' folder and this seems not allowed.


#ifdef PETSC_V3_8_X

#include 
#include 
#include 
#include 
  use petscsys
  use petscdmplex
  use petscsf

#endif

Thanks,

Danyang


On 18-03-02 12:08 PM, Matthew Knepley wrote:
On Fri, Mar 2, 2018 at 3:00 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


On 18-03-02 10:58 AM, Matthew Knepley wrote:


On Fri, Mar 2, 2018 at 1:41 PM, Danyang Su <danyang...@gmail.com
<mailto:danyang...@gmail.com>> wrote:


On 18-02-19 03:30 PM, Matthew Knepley wrote:

On Mon, Feb 19, 2018 at 3:11 PM, Danyang Su
<danyang...@gmail.com <mailto:danyang...@gmail.com>> wrote:

Hi Matt,

Would you please let me know how to check if a cell is
local owned? When overlap is 0 in DMPlexDistribute, all
the cells are local owned. How about overlap > 0? It
sounds like impossible to check by node because a cell
can be local owned even if none of the nodes in this
cell is local owned.


If a cell is in the PetscSF, then it is not locally owned.
The local nodes in the SF are sorted, so I use
PetscFindInt

(http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html

<http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html>).

Hi Matt,

Would you please give me a little more about how to mark the
ghost cells when overlap > 0? What do you mean a cell is in
the PetscSF? I use PetscSFView to export the graph (original
mesh file pile.vtk) and it exports all the cells, including
the ghost cells (PETScSFView.txt).


Yes, I will send you some sample code when I get time. The first
problem is that you are looking at a different PetscSF. This
looks like the
one returned by DMPlexDistribute(). This is mapping the serial
mesh to the parallel mesh. You want


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMGetPointSF.html

<http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMGetPointSF.html>

Then you can look at


https://bitbucket.org/petsc/petsc/src/1788fc36644e622df8cb1a0de85676ccc5af0239/src/dm/impls/plex/plexsubmesh.c?at=master=file-view-default#plexsubmesh.c-683

<https://bitbucket.org/petsc/petsc/src/1788fc36644e622df8cb1a0de85676ccc5af0239/src/dm/impls/plex/plexsubmesh.c?at=master=file-view-default#plexsubmesh.c-683>

I get the pointSF, get out the list of leaves, and find points in
it using PetscFindInt()

Hi Matt,
By using the local dm, I can get the PetscSF I want, as shown
below. Now I need to get the number of ghost cells or local cells
(here 4944) or number of leaves (here 825) for each processor. I
try to use PetscSFGetGraph to get number of leaves in Fortran.
After including "petscsf.h", I got compilation error saying "You
need a ISO C conforming compiler to use the glibc headers". Is
there any alternative way to do this? I do not need the
ghost-neighbor mapping, but just the number of local owned cells.


Also, make sure you are using the latest Fortran style for PETSc:

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html

  [0] Number of roots=11449, leaves=825, remote ranks=1
  [0] 4944 <- (1,0)
  [0] 4945 <- (1,28)
  [0] 4946 <- (1,56)
...
  [1] Number of roots=11695, leaves=538, remote ranks=1
  [1] 5056 <- (0,21)
  [1] 5057 <- (0,43)
  [1] 5058 <- (0,65)
  [1] 5059 <- (0,87)

In file included from /usr/include/features.h:375:0,
                 from /usr/include/stdio.h:28,
                 from /home/dsu/Soft/PETSc/petsc-3.8.3/include/petscsys.h:175,
                 from /home/dsu/Soft/PETSc/petsc-3.8.3/include/petscsf.h:7,
                 from ../../solver/solver_ddmethod.F90:4837:
/usr/include/x86_64-linux-gnu/sys/cdefs.h:30:3: error: #error "You need a ISO C conforming compiler to use the glibc headers"
 # error "You need a ISO C conforming compiler to use the glibc headers"


Can you send this to petsc-ma...@mcs.anl.gov 
<mailto:petsc-ma...@mcs.anl.gov>? It looks like a build problem that 
can be fixed.


  Thanks,

    Matt

Thanks,

Danyang


  Thanks,

    Matt

Thanks,

Danyang


  Thanks,

    Matt

Thanks,

Danyang

-- 
  

Re: [petsc-users] how to check if cell is local owned in DMPlex

2018-03-02 Thread Danyang Su



On 18-03-02 10:58 AM, Matthew Knepley wrote:
On Fri, Mar 2, 2018 at 1:41 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:



On 18-02-19 03:30 PM, Matthew Knepley wrote:

On Mon, Feb 19, 2018 at 3:11 PM, Danyang Su <danyang...@gmail.com
<mailto:danyang...@gmail.com>> wrote:

Hi Matt,

Would you please let me know how to check whether a cell is locally owned? When overlap is 0 in DMPlexDistribute, all the cells are locally owned. What about overlap > 0? It seems impossible to check by node, because a cell can be locally owned even if none of the nodes in the cell is locally owned.


If a cell is in the PetscSF, then it is not locally owned. The
local nodes in the SF are sorted, so I use
PetscFindInt

(http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html

<http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html>).

Hi Matt,

Would you please give me a little more about how to mark the ghost
cells when overlap > 0? What do you mean a cell is in the PetscSF?
I use PetscSFView to export the graph (original mesh file
pile.vtk) and it exports all the cells, including the ghost cells
(PETScSFView.txt).


Yes, I will send you some sample code when I get time. The first 
problem is that you are looking at a different PetscSF. This looks 
like the
one returned by DMPlexDistribute(). This is mapping the serial mesh to 
the parallel mesh. You want


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMGetPointSF.html

Then you can look at

https://bitbucket.org/petsc/petsc/src/1788fc36644e622df8cb1a0de85676ccc5af0239/src/dm/impls/plex/plexsubmesh.c?at=master=file-view-default#plexsubmesh.c-683

I get the pointSF, get out the list of leaves, and find points in it 
using PetscFindInt()

Hi Matt,
By using the local dm, I can get the PetscSF I want, as shown below. Now I need to get the number of ghost cells, the number of local cells (here 4944), or the number of leaves (here 825) for each processor. I tried to use PetscSFGetGraph to get the number of leaves in Fortran. After including "petscsf.h", I got a compilation error saying "You need a ISO C conforming compiler to use the glibc headers". Is there any alternative way to do this? I do not need the ghost-neighbor mapping, just the number of locally owned cells.


  [0] Number of roots=11449, leaves=825, remote ranks=1
  [0] 4944 <- (1,0)
  [0] 4945 <- (1,28)
  [0] 4946 <- (1,56)
...
  [1] Number of roots=11695, leaves=538, remote ranks=1
  [1] 5056 <- (0,21)
  [1] 5057 <- (0,43)
  [1] 5058 <- (0,65)
  [1] 5059 <- (0,87)

In file included from /usr/include/features.h:375:0,
 from /usr/include/stdio.h:28,
 from 
/home/dsu/Soft/PETSc/petsc-3.8.3/include/petscsys.h:175,

 from /home/dsu/Soft/PETSc/petsc-3.8.3/include/petscsf.h:7,
 from ../../solver/solver_ddmethod.F90:4837:
/usr/include/x86_64-linux-gnu/sys/cdefs.h:30:3: error: #error "You need 
a ISO C conforming compile\

r to use the glibc headers"
 # error "You need a ISO C conforming compiler to use the glibc headers"

Thanks,

Danyang


  Thanks,

    Matt

Thanks,

Danyang


  Thanks,

    Matt

Thanks,

Danyang

-- 
What most experimenters take for granted before they begin their

experiments is infinitely more interesting than any results to
which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
<http://www.caam.rice.edu/%7Emk51/>





--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/%7Emk51/>




Re: [petsc-users] how to check if cell is local owned in DMPlex

2018-03-02 Thread Danyang Su

On 18-03-02 10:58 AM, Matthew Knepley wrote:
On Fri, Mar 2, 2018 at 1:41 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:



On 18-02-19 03:30 PM, Matthew Knepley wrote:

On Mon, Feb 19, 2018 at 3:11 PM, Danyang Su <danyang...@gmail.com
<mailto:danyang...@gmail.com>> wrote:

Hi Matt,

Would you please let me know how to check if a cell is locally
owned? When overlap is 0 in DMPlexDistribute, all the cells
are locally owned. How about overlap > 0? It seems impossible
to check by node, because a cell can be locally owned
even if none of the nodes in this cell is locally owned.


If a cell is in the PetscSF, then it is not locally owned. The
local nodes in the SF are sorted, so I use
PetscFindInt

(http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html

<http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html>).

Hi Matt,

Would you please give me a little more detail about how to mark the ghost
cells when overlap > 0? What do you mean by a cell being in the PetscSF?
I used PetscSFView to export the graph (original mesh file
pile.vtk) and it exports all the cells, including the ghost cells
(PETScSFView.txt).


Yes, I will send you some sample code when I get time. The first 
problem is that you are looking at a different PetscSF. This looks 
like the
one returned by DMPlexDistribute(). This is mapping the serial mesh to 
the parallel mesh. You want


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMGetPointSF.html

Then you can look at

https://bitbucket.org/petsc/petsc/src/1788fc36644e622df8cb1a0de85676ccc5af0239/src/dm/impls/plex/plexsubmesh.c?at=master=file-view-default#plexsubmesh.c-683

I get the pointSF, get out the list of leaves, and find points in it 
using PetscFindInt()
Thanks Matt. I will try to figure it out based on your provided link and 
will let you know if I get it work.


Danyang


  Thanks,

    Matt

Thanks,

Danyang


  Thanks,

    Matt

Thanks,

Danyang

-- 
What most experimenters take for granted before they begin their

experiments is infinitely more interesting than any results to
which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
<http://www.caam.rice.edu/%7Emk51/>





--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/%7Emk51/>




Re: [petsc-users] object name overwritten in VecView

2018-02-28 Thread Danyang Su

Hi Barry and Matt,

Thanks for your quick response. Considering the output performance, as 
well as the long-term plan of PETSc development, which format would you 
suggest? I personally prefer a data format that can be post-processed 
by Paraview, as our sequential code (written without PETSc) also uses a 
Paraview-compatible data format. XDMF sounds promising, as suggested by Matt.
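
For reference, a minimal Fortran sketch of the HDF5 route, mirroring the VTK
viewer snippet from the original post (only a sketch: it assumes PETSc is
configured with HDF5, and the exact viewer calls and output format expected by
petsc_gen_xdmf.py may differ by PETSc version):

  call PetscViewerCreate(PETSC_COMM_WORLD, viewer, ierr);CHKERRA(ierr)
  call PetscViewerSetType(viewer, PETSCVIEWERHDF5, ierr);CHKERRA(ierr)
  call PetscViewerFileSetMode(viewer, FILE_MODE_WRITE, ierr);CHKERRA(ierr)
  call PetscViewerFileSetName(viewer, 'test.h5', ierr);CHKERRA(ierr)

  ! write the mesh first so petsc_gen_xdmf.py can build the XDMF file
  call DMView(dm, viewer, ierr);CHKERRA(ierr)

  call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr)
  call VecView(u, viewer, ierr);CHKERRA(ierr)
  call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr)
  call VecView(v, viewer, ierr);CHKERRA(ierr)

  call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr)

Running $PETSC_DIR/bin/petsc_gen_xdmf.py test.h5 afterwards should produce
test.xmf for Paraview, as Matt describes below.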


Thanks,

Danyang

On 18-02-28 08:17 AM, Smith, Barry F. wrote:


  It turns out the fix is really easy. Here is a patch.

  Apply it with

    patch -p1 < barry-vtk.patch

  then do

    make gnumake

  all in $PETSC_DIR



> On Feb 28, 2018, at 9:07 AM, Danyang Su <danyang...@gmail.com> wrote:
>
> Hi Matt,
>
> Thanks for your suggestion and I will use xmf instead.
>
> Regards,
>
> Danyang
>
> On February 28, 2018 3:58:08 AM PST, Matthew Knepley 
<knep...@gmail.com> wrote:
> On Wed, Feb 28, 2018 at 12:39 AM, Smith, Barry F. 
<bsm...@mcs.anl.gov> wrote:

>
>   Matt,
>
>   I have confirmed this is reproducible and a bug. The problem 
arises because

>
> frame #0: 0x00010140625a 
libpetsc.3.8.dylib`PetscViewerVTKAddField_VTK(viewer=0x7fe66760c750, 
dm=0x7fe668810820, 
PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at 
plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, 
vec=0x7fe66880ee20) at vtkv.c:140
> frame #1: 0x000101404e6e 
libpetsc.3.8.dylib`PetscViewerVTKAddField(viewer=0x7fe66760c750, 
dm=0x7fe668810820, 
PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at 
plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, 
vec=0x7fe66880ee20) at vtkv.c:46
> frame #2: 0x000101e0b7c3 
libpetsc.3.8.dylib`VecView_Plex_Local(v=0x7fe66880ee20, 
viewer=0x7fe66760c750) at plex.c:301
> frame #3: 0x000101e0ead7 
libpetsc.3.8.dylib`VecView_Plex(v=0x7fe66880e820, 
viewer=0x7fe66760c750) at plex.c:348

>
> keeps a linked list of vectors that are to be viewed and the vectors 
are the same Vec because they are obtained with DMGetLocalVector().

>
> The safest fix is to have PetscViewerVTKAddField_VTK() do a 
VecDuplicate() on the vector passed in and store that in the linked 
list instead of just storing a pointer to the passed in vector (which 
might and can be overwritten before all the linked vectors are 
actually stored).

>
> Danyang,
>
> Barry is right, and the bug can be fixed the way he says. However, 
this points out why VTK is a bad format. I think a better choice is

> to use HDF5 and XDMF. For example, in my code now I always use
>
>   DMVIewFromOptions(dm, NULL, "-dm_view");
>
> and then later (perhaps several times)
>
>   VecViewFromOptions(u, NULL, "-u_vec_view")
>   VecViewFromOptions(v, NULL, "-v_vec_view")
>
> and then on the command line
>
>   -dm_view hdf5:test.h5 -u_vec_view hdf5:test.h5::append -v_vec_view 
hdf5:test.h5::append

>
> which produces a file
>
>   test.h5
>
> Then I run
>
>   $PETSC_DIR/bin/petsc_gen_xdmf.py test.h5
>
> which produces another file
>
>   test.xmf
>
> This can be loaded by Paraview for visualization.
>
>   Thanks,
>
>  Matt
>
>
>
>   Barry
>
>
>
> > On Feb 27, 2018, at 10:44 PM, Danyang Su <danyang...@gmail.com> wrote:
> >
> > Hi All,
> >
> > How to set different object names when using multiple VecView? I 
try to use PetscObjectSetName with multiple output, but the object 
name is overwritten by the last one.

> >
> > As shown below, as well as the enclosed files as example, the 
vector name in sol.vtk is vec_v for both vector u and v.

> >
> >  call PetscViewerCreate(PETSC_COMM_WORLD, viewer, 
ierr);CHKERRA(ierr)
> >  call PetscViewerSetType(viewer, PETSCVIEWERVTK, 
ierr);CHKERRA(ierr)
> >  call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK, 
ierr);CHKERRA(ierr)
> >  call PetscViewerFileSetName(viewer, 'sol.vtk', 
ierr);CHKERRA(ierr)

> >
> >  call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr)
> >  call VecView(u, viewer, ierr);CHKERRA(ierr)
> >
> >  call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr)
> >  call VecView(v, viewer, ierr);CHKERRA(ierr)
> >
> >  call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr)
> >
> >  call DMRestoreGlobalVector(dm, u, ierr);CHKERRA(ierr)
> >  call DMRestoreGlobalVector(dm, v, ierr);CHKERRA(ierr)
> >
> > Thanks,
> >
> > Danyang
> >
> > 
>
>
>
>
> --
> What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/ 
<https://www.cse.buffalo.edu/%7Eknepley/>

>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.





Re: [petsc-users] object name overwritten in VecView

2018-02-28 Thread Danyang Su
Hi Matt,

Thanks for your suggestion and I will use xmf instead. 

Regards,

Danyang

On February 28, 2018 3:58:08 AM PST, Matthew Knepley <knep...@gmail.com> wrote:
>On Wed, Feb 28, 2018 at 12:39 AM, Smith, Barry F. <bsm...@mcs.anl.gov>
>wrote:
>
>>
>>   Matt,
>>
>>   I have confirmed this is reproducible and a bug. The problem arises
>> because
>>
>> frame #0: 0x00010140625a libpetsc.3.8.dylib`
>> PetscViewerVTKAddField_VTK(viewer=0x7fe66760c750,
>> dm=0x7fe668810820,
>PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll
>> at plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD,
>> vec=0x7fe66880ee20) at vtkv.c:140
>> frame #1: 0x000101404e6e libpetsc.3.8.dylib`
>> PetscViewerVTKAddField(viewer=0x7fe66760c750,
>dm=0x7fe668810820,
>> PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at
>> plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD,
>vec=0x7fe66880ee20)
>> at vtkv.c:46
>> frame #2: 0x000101e0b7c3
>libpetsc.3.8.dylib`VecView_Plex_Local(v=0x7fe66880ee20,
>> viewer=0x7fe66760c750) at plex.c:301
>> frame #3: 0x000101e0ead7
>libpetsc.3.8.dylib`VecView_Plex(v=0x7fe66880e820,
>> viewer=0x7fe66760c750) at plex.c:348
>>
>> keeps a linked list of vectors that are to be viewed and the vectors
>are
>> the same Vec because they are obtained with DMGetLocalVector().
>>
>> The safest fix is to have PetscViewerVTKAddField_VTK() do a
>VecDuplicate()
>> on the vector passed in and store that in the linked list instead of
>just
>> storing a pointer to the passed in vector (which might and can be
>> overwritten before all the linked vectors are actually stored).
>>
>
>Danyang,
>
>Barry is right, and the bug can be fixed the way he says. However, this
>points out why VTK is a bad format. I think a better choice is
>to use HDF5 and XDMF. For example, in my code now I always use
>
>  DMVIewFromOptions(dm, NULL, "-dm_view");
>
>and then later (perhaps several times)
>
>  VecViewFromOptions(u, NULL, "-u_vec_view")
>  VecViewFromOptions(v, NULL, "-v_vec_view")
>
>and then on the command line
>
>  -dm_view hdf5:test.h5 -u_vec_view hdf5:test.h5::append -v_vec_view
>hdf5:test.h5::append
>
>which produces a file
>
>  test.h5
>
>Then I run
>
>  $PETSC_DIR/bin/petsc_gen_xdmf.py test.h5
>
>which produces another file
>
>  test.xmf
>
>This can be loaded by Paraview for visualization.
>
>  Thanks,
>
> Matt
>
>
>
>>
>>   Barry
>>
>>
>>
>> > On Feb 27, 2018, at 10:44 PM, Danyang Su <danyang...@gmail.com>
>wrote:
>> >
>> > Hi All,
>> >
>> > How to set different object names when using multiple VecView? I
>try to
>> use PetscObjectSetName with multiple output, but the object name is
>> overwritten by the last one.
>> >
>> > As shown below, as well as the enclosed files as example, the
>vector
>> name in sol.vtk is vec_v for both vector u and v.
>> >
>> >  call PetscViewerCreate(PETSC_COMM_WORLD, viewer,
>> ierr);CHKERRA(ierr)
>> >  call PetscViewerSetType(viewer, PETSCVIEWERVTK,
>ierr);CHKERRA(ierr)
>> >  call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK,
>> ierr);CHKERRA(ierr)
>> >  call PetscViewerFileSetName(viewer, 'sol.vtk',
>ierr);CHKERRA(ierr)
>> >
>> >  call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr)
>> >  call VecView(u, viewer, ierr);CHKERRA(ierr)
>> >
>> >  call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr)
>> >  call VecView(v, viewer, ierr);CHKERRA(ierr)
>> >
>> >  call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr)
>> >
>> >  call DMRestoreGlobalVector(dm, u, ierr);CHKERRA(ierr)
>> >  call DMRestoreGlobalVector(dm, v, ierr);CHKERRA(ierr)
>> >
>> > Thanks,
>> >
>> > Danyang
>> >
>> > 
>>
>>
>
>
>-- 
>What most experimenters take for granted before they begin their
>experiments is infinitely more interesting than any results to which
>their
>experiments lead.
>-- Norbert Wiener
>
>https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

[petsc-users] object name overwritten in VecView

2018-02-27 Thread Danyang Su

Hi All,

How do I set different object names when using multiple VecView calls? I tried 
to use PetscObjectSetName with multiple outputs, but the object name is 
overwritten by the last one.


As shown below, and in the enclosed example files, the vector name in sol.vtk 
is vec_v for both vector u and vector v.


  call PetscViewerCreate(PETSC_COMM_WORLD, viewer, ierr);CHKERRA(ierr)
  call PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr);CHKERRA(ierr)
  call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK, 
ierr);CHKERRA(ierr)

  call PetscViewerFileSetName(viewer, 'sol.vtk', ierr);CHKERRA(ierr)

  call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr)
  call VecView(u, viewer, ierr);CHKERRA(ierr)

  call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr)
  call VecView(v, viewer, ierr);CHKERRA(ierr)

  call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr)

  call DMRestoreGlobalVector(dm, u, ierr);CHKERRA(ierr)
  call DMRestoreGlobalVector(dm, v, ierr);CHKERRA(ierr)

Thanks,

Danyang

  program DMPlexTestField
#include "petsc/finclude/petscdmplex.h"
#include "petsc/finclude/petscdmlabel.h"
  use petscdmplex
  implicit none

  DM :: dm
  DMLabel :: label
  Vec :: u, v
  PetscViewer :: viewer
  PetscSection :: section
  PetscInt :: dim,numCells,numFields,numBC
  PetscInt :: i,val
  PetscInt, target, dimension(3) ::  numComp
  PetscInt, pointer :: pNumComp(:)
  PetscInt, target, dimension(12) ::  numDof
  PetscInt, pointer :: pNumDof(:)
  PetscInt, target, dimension(1) ::  bcField
  PetscInt, pointer :: pBcField(:)
  PetscInt :: zero,eight
  IS, target, dimension(1) ::   bcCompIS
  IS, target, dimension(1) ::   bcPointIS
  IS, pointer :: pBcCompIS(:)
  IS, pointer :: pBcPointIS(:)
  PetscBool :: interpolate
  PetscErrorCode :: ierr

  call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
  if (ierr .ne. 0) then
print*,'Unable to initialize PETSc'
stop
  endif
  dim = 2
  call PetscOptionsGetInt(PETSC_NULL_OPTIONS,PETSC_NULL_CHARACTER,'-dim', dim,PETSC_NULL_BOOL, ierr);CHKERRA(ierr)
  interpolate = PETSC_TRUE
! Create a mesh
  if (dim .eq. 2) then
 numCells = 2
  else
 numCells = 1
  endif
  call DMPlexCreateBoxMesh(PETSC_COMM_WORLD, dim, numCells,interpolate, dm, ierr);CHKERRA(ierr)
! Create a scalar field u, a vector field v, and a surface vector field w
  numFields  = 3
  numComp(1) = 1
  numComp(2) = 1
  numComp(3) = 1
  pNumComp => numComp
  do i = 1, numFields*(dim+1)
 numDof(i) = 0
  end do
! Let u be defined on vertices
  numDof(0*(dim+1)+1) = 1
! Let v be defined on cells
  numDof(1*(dim+1)+1) = 1
! Let w be defined on faces
  numDof(2*(dim+1)+1)   = 1
  pNumDof => numDof
! Setup boundary conditions
  numBC = 1
! Test label retrieval
  call DMGetLabel(dm, 'marker', label, ierr);CHKERRA(ierr)
  zero = 0
  call DMLabelGetValue(label, zero, val, ierr);CHKERRA(ierr)
  if (val .ne. -1) then
CHKERRA(1)
  endif
  eight = 8
  call DMLabelGetValue(label, eight, val, ierr);CHKERRA(ierr)
  if (val .ne. 1) then
CHKERRA(1)
  endif
! Prescribe a Dirichlet condition on u on the boundary
!   Label "marker" is made by the mesh creation routine
  bcField(1) = 0
  pBcField => bcField
  call ISCreateStride(PETSC_COMM_WORLD, 1, 0, 1, bcCompIS(1), ierr);CHKERRA(ierr)
  pBcCompIS => bcCompIS
  call DMGetStratumIS(dm, 'marker', 1, bcPointIS(1),ierr);CHKERRA(ierr)
  pBcPointIS => bcPointIS
! Create a PetscSection with this data layout
  call DMPlexCreateSection(dm,dim,numFields,pNumComp,pNumDof,numBC,pBcField,pBcCompIS,pBcPointIS,PETSC_NULL_IS,section,ierr)
  CHKERRA(ierr)
  call ISDestroy(bcCompIS(1), ierr);CHKERRA(ierr)
  call ISDestroy(bcPointIS(1), ierr);CHKERRA(ierr)
! Name the Field variables
  call PetscSectionSetFieldName(section, 0, 'u', ierr);CHKERRA(ierr)
  call PetscSectionSetFieldName(section, 1, 'v', ierr);CHKERRA(ierr)
  call PetscSectionSetFieldName(section, 2, 'w', ierr);CHKERRA(ierr)
  call PetscSectionView(section, PETSC_VIEWER_STDOUT_WORLD, ierr);CHKERRA(ierr)
! Tell the DM to use this data layout
  call DMSetDefaultSection(dm, section, ierr);CHKERRA(ierr)
! Create a Vec with this layout and view it
  call DMGetGlobalVector(dm, u, ierr);CHKERRA(ierr)
  call VecDuplicate(u,v,ierr);CHKERRA(ierr)

  call PetscViewerCreate(PETSC_COMM_WORLD, viewer, ierr);CHKERRA(ierr)
  call PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr);CHKERRA(ierr)
  call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK, ierr);CHKERRA(ierr)
  call PetscViewerFileSetName(viewer, 'sol.vtk', ierr);CHKERRA(ierr)

  call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr)
  call VecView(u, 

Re: [petsc-users] Cell type for DMPlexCreateFromCellList

2018-02-23 Thread Danyang Su


On 18-02-23 03:04 AM, Matthew Knepley wrote:
On Fri, Feb 23, 2018 at 1:33 AM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi All,

What cell types does DMPlexCreateFromCellList support? I tested this
with triangles, tetrahedra and prisms. Both triangles and
tetrahedra work, but a prism mesh throws an error saying "Cone size 6
not supported for dimension 3".

Could anyone tell me all the supported cell types?


The limitation occurs in two places:

  1) Calculating edges and faces: I only know how to do this for tri, 
tet, quad, and hex. Give PETSC_FALSE to CreateFromCellList() and this 
error will go away.
Passing PETSC_FALSE to CreateFromCellList() works. There is no problem 
with the distributed nodes and cells. Is there any plan to add 
prism (wedge) support in the near future?


      You could also provide the information for interpolating prisms, 
which would depend on how they are ordered when you read in cells.


  2) Even if you read them in, I have no geometric routines for prisms.
That's fine for me. I just need to create the DMPlex from the given 
cell/vertex list and distribute it over all the processors, then follow the 
general routines that were used for the structured grid.


Thanks,

Danyang


  Thanks,

     Matt

Thanks,

Danyang




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/%7Emk51/>




[petsc-users] Cell type for DMPlexCreateFromCellList

2018-02-22 Thread Danyang Su

Hi All,

What cell types does DMPlexCreateFromCellList support? I tested this with 
triangles, tetrahedra and prisms. Both triangles and tetrahedra work, but a 
prism mesh throws an error saying "Cone size 6 not supported for dimension 3".


Could anyone tell me all the supported cell types?

Thanks,

Danyang



Re: [petsc-users] Question on DMPlexCreateSection for Fortran

2018-02-22 Thread Danyang Su

Hi Matt,

Just to let you know that after updating to PETSc 3.8.3, DMPlexCreateSection 
in my code now works.


One more question: what is the PETSC_NULL_XXX for an IS pointer? As shown 
below, in C code one just passes NULL, but in Fortran, what is the name of 
the null object for pBcCompIS and pBcPointIS?


    call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim,   &
numFields,pNumComp,pNumDof,   &
numBC,pBcField,   &
pBcCompIS,pBcPointIS, &
 PETSC_NULL_IS,section,ierr)
    CHKERRQ(ierr)

Thanks,

Danyang

On 18-02-21 09:22 AM, Danyang Su wrote:


Hi Matt,

To test the Segmentation Violation problem in my code, I modified the 
example ex1f90.F to reproduce the problem I have in my own code.


If I use DMPlexCreateBoxMesh to generate the mesh, the code works fine. 
However, if I use DMPlexCreateGmshFromFile, using the same mesh 
exported from "DMPlexCreateBoxMesh", it gives a Segmentation Violation 
error.


Did I miss something in the input mesh file? My first guess is the 
label "marker" used in the code, but I couldn't find any place to set 
this label.


Would you please let me know how to solve this problem? My code is 
done in a similar way to ex1f90: it reads the mesh from an external file or 
creates it from a cell list, distributes the mesh (these steps already work), 
and then creates sections and sets ndof on the nodes.


Thanks,

Danyang


On 18-02-20 10:07 AM, Danyang Su wrote:

On 18-02-20 09:52 AM, Matthew Knepley wrote:
On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi All,

I tried to compile the DMPlexCreateSection code but got error
information as shown below.

Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type

I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then
the code can be compiled but run into Segmentation Violation
error in DMPlexCreateSection.

From the webpage

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html 



The F90 version is DMPlexCreateSectionF90. Doing this with F77 
arrays would have been too painful.

Hi Matt,

Sorry, I still cannot compile the code if I use DMPlexCreateSectionF90 
instead of DMPlexCreateSection. Would you please tell me in more 
detail?


undefined reference to `dmplexcreatesectionf90_'

then I #include , but this throws 
more errors during compilation.



    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:

  PETSCSECTION_HIDE section
  1
Error: Unclassifiable statement at (1)
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:167.10:
    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:

  PETSCSECTION_HIDE section
  1
Error: Unclassifiable statement at (1)
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:179.10:
    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:



  Thanks,

     Matt

dmda_flow%da is distributed dm object that works fine.

The fortran example I follow is

http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90

<http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90>.


What parameters should I use if passing null to bcField,
bcComps, bcPoints and perm.

PetscErrorCode DMPlexCreateSection(DM dm, PetscInt dim, PetscInt numFields,
                                   const PetscInt numComp[], const PetscInt numDof[],
                                   PetscInt numBC, const PetscInt bcField[],
                                   const IS bcComps[], const IS bcPoints[],
                                   IS perm, PetscSection *section)

Re: [petsc-users] Question on DMPlexCreateSection for Fortran

2018-02-21 Thread Danyang Su

Hi Matt,

To test the Segmentation Violation problem in my code, I modified the 
example ex1f90.F to reproduce the problem I have in my own code.


If I use DMPlexCreateBoxMesh to generate the mesh, the code works fine. 
However, if I use DMPlexCreateGmshFromFile, using the same mesh exported 
from "DMPlexCreateBoxMesh", it gives a Segmentation Violation error.


Did I miss something in the input mesh file? My first guess is the label 
"marker" used in the code, but I couldn't find any place to set this label.


Would you please let me know how to solve this problem? My code is done 
in a similar way to ex1f90: it reads the mesh from an external file or creates 
it from a cell list, distributes the mesh (these steps already work), and then 
creates sections and sets ndof on the nodes.


Thanks,

Danyang


On 18-02-20 10:07 AM, Danyang Su wrote:

On 18-02-20 09:52 AM, Matthew Knepley wrote:
On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi All,

I tried to compile the DMPlexCreateSection code but got error
information as shown below.

Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type

I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then
the code can be compiled but run into Segmentation Violation
error in DMPlexCreateSection.

From the webpage

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html 



The F90 version is DMPlexCreateSectionF90. Doing this with F77 arrays 
would have been too painful.

Hi Matt,

Sorry, I still cannot compile the code if I use DMPlexCreateSectionF90 
instead of DMPlexCreateSection. Would you please tell me in more detail?


undefined reference to `dmplexcreatesectionf90_'

then I #include , but this throws more 
errors during compilation.



    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:

  PETSCSECTION_HIDE section
  1
Error: Unclassifiable statement at (1)
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:167.10:
    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:

  PETSCSECTION_HIDE section
  1
Error: Unclassifiable statement at (1)
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:179.10:
    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:



  Thanks,

     Matt

dmda_flow%da is distributed dm object that works fine.

The fortran example I follow is

http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90

<http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90>.


What parameters should I use if passing null to bcField, bcComps,
bcPoints and perm.

PetscErrorCode DMPlexCreateSection(DM dm, PetscInt dim, PetscInt numFields,
                                   const PetscInt numComp[], const PetscInt numDof[],
                                   PetscInt numBC, const PetscInt bcField[],
                                   const IS bcComps[], const IS bcPoints[],
                                   IS perm, PetscSection *section)

#include 
#include 
#include 

...

#ifdef USG
    numFields = 1
    numComp(1) = 1
    pNumComp => numComp

    do i = 1, numFields*(dmda_flow%dim+1)
  numDof(i) = 0
    end do
    numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof
    pNumDof => numDof

    numBC = 0

   

Re: [petsc-users] Question on DMPlexCreateSection for Fortran

2018-02-20 Thread Danyang Su

On 18-02-20 09:52 AM, Matthew Knepley wrote:
On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi All,

I tried to compile the DMPlexCreateSection code but got error
information as shown below.

Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type

I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then
the code can be compiled but run into Segmentation Violation error
in DMPlexCreateSection.

From the webpage

http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html 



The F90 version is DMPlexCreateSectionF90. Doing this with F77 arrays 
would have been too painful.

Hi Matt,

Sorry, I still cannot compile the code if I use DMPlexCreateSectionF90 
instead of DMPlexCreateSection. Would you please tell me in more detail?


undefined reference to `dmplexcreatesectionf90_'

then I #include , but this throws more 
errors during compilation.



    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:

  PETSCSECTION_HIDE section
  1
Error: Unclassifiable statement at (1)
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:167.10:
    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:

  PETSCSECTION_HIDE section
  1
Error: Unclassifiable statement at (1)
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:179.10:
    Included at 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6:

    Included at ../../solver/solver_ddmethod.F90:62:



  Thanks,

     Matt

dmda_flow%da is distributed dm object that works fine.

The fortran example I follow is

http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90

<http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90>.


What parameters should I use if passing null to bcField, bcComps,
bcPoints and perm.

PetscErrorCode DMPlexCreateSection(DM dm, PetscInt dim, PetscInt numFields,
                                   const PetscInt numComp[], const PetscInt numDof[],
                                   PetscInt numBC, const PetscInt bcField[],
                                   const IS bcComps[], const IS bcPoints[],
                                   IS perm, PetscSection *section)

#include 
#include 
#include 

...

#ifdef USG
    numFields = 1
    numComp(1) = 1
    pNumComp => numComp

    do i = 1, numFields*(dmda_flow%dim+1)
  numDof(i) = 0
    end do
    numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof
    pNumDof => numDof

    numBC = 0

    call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim, &
numFields,pNumComp,pNumDof, &
numBC,PETSC_NULL_INTEGER, &
PETSC_NULL_IS,PETSC_NULL_IS, & !Error here
PETSC_NULL_IS,section,ierr)
    CHKERRQ(ierr)

    call PetscSectionSetFieldName(section,0,'flow',ierr)
    CHKERRQ(ierr)

    call DMSetDefaultSection(dmda_flow%da,section,ierr)
    CHKERRQ(ierr)

    call PetscSectionDestroy(section,ierr)
    CHKERRQ(ierr)
#endif

Thanks,

Danyang




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/%7Emk51/>




[petsc-users] Question on DMPlexCreateSection for Fortran

2018-02-20 Thread Danyang Su

Hi All,

I tried to compile the DMPlexCreateSection code but got error 
information as shown below.


Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type

I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS; the code then 
compiles but runs into a Segmentation Violation error in 
DMPlexCreateSection.


dmda_flow%da is distributed dm object that works fine.

The fortran example I follow is 
http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90. 



What parameters should I use if passing null to bcField, bcComps, 
bcPoints and perm?


PetscErrorCode DMPlexCreateSection(DM dm, PetscInt dim, PetscInt numFields,
                                   const PetscInt numComp[], const PetscInt numDof[],
                                   PetscInt numBC, const PetscInt bcField[],
                                   const IS bcComps[], const IS bcPoints[],
                                   IS perm, PetscSection *section)


#include 
#include 
#include 

...

#ifdef USG
    numFields = 1
    numComp(1) = 1
    pNumComp => numComp

    do i = 1, numFields*(dmda_flow%dim+1)
  numDof(i) = 0
    end do
    numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof
    pNumDof => numDof

    numBC = 0

    call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim,  &
numFields,pNumComp,pNumDof,   &
numBC,PETSC_NULL_INTEGER,   &
PETSC_NULL_IS,PETSC_NULL_IS, & !Error here
 PETSC_NULL_IS,section,ierr)
    CHKERRQ(ierr)

    call PetscSectionSetFieldName(section,0,'flow',ierr)
    CHKERRQ(ierr)

    call DMSetDefaultSection(dmda_flow%da,section,ierr)
    CHKERRQ(ierr)

    call PetscSectionDestroy(section,ierr)
    CHKERRQ(ierr)
#endif

Thanks,

Danyang



[petsc-users] how to check if cell is local owned in DMPlex

2018-02-19 Thread Danyang Su

Hi Matt,

Would you please let me know how to check if a cell is locally owned? When 
overlap is 0 in DMPlexDistribute, all the cells are locally owned. How 
about overlap > 0? It seems impossible to check by node, because a 
cell can be locally owned even if none of the nodes in this cell is locally 
owned.


Thanks,

Danyang



Re: [petsc-users] Error when use DMPlexGetVertexNumbering

2018-02-16 Thread Danyang Su

On 18-02-16 10:50 AM, Matthew Knepley wrote:
On Fri, Feb 16, 2018 at 1:45 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi Matt,

I am trying to get the global vertex index and cell index from the local
mesh and have run into a problem. What I need is the local-to-global index
(the original index used in DMPlexCreateFromCellList is best, as
the user knows exactly where each node/cell is) for vertices and cells,
which will be used to assign material properties and some
parameters to a specified cell/vertex.


I would recommend doing this before you distribute the mesh. Just set 
these properties using a DMLabel and it will be automatically distributed.
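
A rough sketch of that approach in Fortran follows (the label name 'material',
the helper mat_id_of_cell, and the other new names are made up for
illustration; the null third argument of DMPlexDistribute is PETSC_NULL_OBJECT
in the 3.7/3.8-era Fortran interface and PETSC_NULL_SF in later releases):

    PetscInt :: cstart, cend, c, c_local, mat_id
    DM :: dm_dist

    ! tag each cell of the still-serial mesh with a material id
    call DMCreateLabel(dmda_flow%da, 'material', ierr)
    CHKERRQ(ierr)
    call DMPlexGetHeightStratum(dmda_flow%da, 0, cstart, cend, ierr)
    CHKERRQ(ierr)
    do c = cstart, cend-1
      call DMSetLabelValue(dmda_flow%da, 'material', c, mat_id_of_cell(c), ierr)
      CHKERRQ(ierr)
    end do
    ! the label is migrated together with the mesh
    call DMPlexDistribute(dmda_flow%da, 0, PETSC_NULL_OBJECT, dm_dist, ierr)
    CHKERRQ(ierr)
    ! on the distributed mesh, query the value for a local cell number
    call DMGetLabelValue(dm_dist, 'material', c_local, mat_id, ierr)
    CHKERRQ(ierr)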

I will try this. Thanks.


I can use coordinates to select a vertex/cell, which is already
included in my code, but I still want to keep this feature. This was pretty
straightforward when using the structured grid. For the unstructured
grid, I just got a compile error saying "You need a ISO C
conforming compiler to use the glibc headers".


For any compilation problem, you have to send the configure.log and 
make.log. However, it appears that you are not using the same compiler 
that you configured with.
Sorry for the confusion. There are linux-gnu-dbg (debug) and 
linux-gnu-opt (optimized) configurations; the one I used to 
compile is the debug version and the attached one is the optimized version. I 
will try your recommendation first.


Thanks,

Danyang


   Matt

Would you please let me know if I need to change the configuration
of PETSc, or whether there is any alternative way to get the local-to-global
index without using DMPlexGetVertexNumbering and DMPlexGetCellNumbering?

The error information during compilation is shown below, followed
by PETSc configuration.

 -o ../../solver/solver_ddmethod.o ../../solver/solver_ddmethod.F90

In file included from /usr/include/features.h:375:0,
 from /usr/include/stdio.h:28,
 from
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscsys.h:161,
 from
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscis.h:8,
 from
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscvec.h:10,
 from
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscmat.h:7,
 from
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/private/dmpleximpl.h:5,
 from ../../solver/solver_ddmethod.F90:4122:
/usr/include/x86_64-linux-gnu/sys/cdefs.h:30:3: error: #error "You
need a ISO C conforming compiler to us\
e the glibc headers"
 # error "You need a ISO C conforming compiler to use the glibc
headers"
   ^
In file included from /usr/include/features.h:399:0,
 from /usr/include/stdio.h:28,
 from
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscsys.h:161,
 from
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscis.h:8,
 from
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscvec.h:10,
 from
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscmat.h:7,
 from
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/private/dmpleximpl.h:5,
 from ../../solver/solver_ddmethod.F90:4122:
/usr/include/x86_64-linux-gnu/gnu/stubs.h:7:0: fatal error:
gnu/stubs-32.h: No such file or directory
 # include 
 ^
compilation terminated.
make: [../../solver/solver_ddmethod.o] Error 1 (ignored)


PETSc configuration
--with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mumps
--download-scalapack --download-parmetis --download-metis
--download-ptscotch --download-fblaslapack --download-mpich
--download-hypre --download-superlu_dist --download-hdf5=yes
--with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native"
CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3
-march=native -mtune=native"

Thanks and regards,

Danyang




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/%7Emk51/>




[petsc-users] Error when use DMPlexGetVertexNumbering

2018-02-16 Thread Danyang Su

Hi Matt,

I am trying to get the global vertex index and cell index from the local mesh 
and have run into a problem. What I need is the local-to-global index (the 
original index used in DMPlexCreateFromCellList is best, as the user knows 
exactly where each node/cell is) for vertices and cells, which will be used to 
assign material properties and some parameters to a specified cell/vertex. 
I can use coordinates to select a vertex/cell, which is already included in my 
code, but I still want to keep this feature. This was pretty straightforward 
when using the structured grid. For the unstructured grid, I just got a 
compile error saying "You need a ISO C conforming compiler to use the glibc 
headers".


Would you please let me know if I need to change the configuration of 
PETSc, or whether there is any alternative way to get the local-to-global 
index without using DMPlexGetVertexNumbering and DMPlexGetCellNumbering?


The error information during compilation is shown below, followed by 
PETSc configuration.


 -o ../../solver/solver_ddmethod.o ../../solver/solver_ddmethod.F90

In file included from /usr/include/features.h:375:0,
 from /usr/include/stdio.h:28,
 from 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscsys.h:161,

 from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscis.h:8,
 from 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscvec.h:10,
 from 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscmat.h:7,
 from 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/private/dmpleximpl.h:5,

 from ../../solver/solver_ddmethod.F90:4122:
/usr/include/x86_64-linux-gnu/sys/cdefs.h:30:3: error: #error "You need 
a ISO C conforming compiler to us\

e the glibc headers"
 # error "You need a ISO C conforming compiler to use the glibc headers"
   ^
In file included from /usr/include/features.h:399:0,
 from /usr/include/stdio.h:28,
 from 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscsys.h:161,

 from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscis.h:8,
 from 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscvec.h:10,
 from 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petscmat.h:7,
 from 
/home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/private/dmpleximpl.h:5,

 from ../../solver/solver_ddmethod.F90:4122:
/usr/include/x86_64-linux-gnu/gnu/stubs.h:7:0: fatal error: 
gnu/stubs-32.h: No such file or directory

 # include 
 ^
compilation terminated.
make: [../../solver/solver_ddmethod.o] Error 1 (ignored)


PETSc configuration
--with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mumps 
--download-scalapack --download-parmetis --download-metis 
--download-ptscotch --download-fblaslapack --download-mpich 
--download-hypre --download-superlu_dist --download-hdf5=yes 
--with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" 
CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3 
-march=native -mtune=native"


Thanks and regards,

Danyang



Re: [petsc-users] Question on DMPlexCreateFromCellList and DMPlexCreateFromFile

2018-02-16 Thread Danyang Su

On 18-02-16 10:13 AM, Matthew Knepley wrote:
On Fri, Feb 16, 2018 at 11:36 AM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


On 18-02-15 05:57 PM, Matthew Knepley wrote:


On Thu, Feb 15, 2018 at 7:40 PM, Danyang Su <danyang...@gmail.com
<mailto:danyang...@gmail.com>> wrote:

Hi Matt,

I have a question on DMPlexCreateFromCellList and
DMPlexCreateFromFile. When I use DMPlexCreateFromFile with Gmsh
file input, it works fine and each processor gets its own
part. However, when I use DMPlexCreateFromCellList, all the
processors have the same global mesh. To my understanding, I
should put the global mesh as input, right?


No. Each process should get part of the mesh in
CreateFromCellList(), but the most common thing to do is to
feed the whole mesh in on proc 0, and nothing in on the other procs.
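
A sketch of that calling pattern, using the variable names from the test code
quoted below (ncell_in and nnode_in are made-up names for the rank-dependent
sizes; dmplex_cells and dmplex_verts only need to be filled on rank 0):

  PetscInt :: ncell_in, nnode_in

  if (rank == 0) then
    ncell_in = num_cells        ! the whole mesh goes in on rank 0
    nnode_in = num_nodes
  else
    ncell_in = 0                ! nothing on the other ranks
    nnode_in = 0
  end if
  call DMPlexCreateFromCellList(Petsc_Comm_World,ndim,ncell_in,    &
                                nnode_in,num_nodes_per_cell,       &
                                Petsc_True,dmplex_cells,ndim,      &
                                dmplex_verts,dmda_flow%da,ierr)
  CHKERRQ(ierr)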

Thanks for the explanation. It works now.


Great. Also feel free to suggest improvements, examples, or better 
documentation.
Thanks. I will probably bother you a lot in the near future while porting the 
code from the structured grid version to the unstructured grid version. 
Thanks in advance.

Danyang


  Thanks,

     Matt

Danyang


  Thanks,

    Matt

Otherwise, I should use DMPlexCreateFromCellListParallel
instead if the input is a local mesh.

Below is the test code I use; the results from method 1 are wrong
and those from method 2 are correct. Would you please help to
check if I did anything wrong with the DMPlexCreateFromCellList
input?

!test with 4 processor, global num_cells = 8268, global
num_nodes = 4250

!correct results

 check rank    2  istart 2034 iend 3116
 check rank    3  istart 2148 iend 3293
 check rank    1  istart 2044 iend 3133
 check rank    0  istart 2042 iend 3131

!wrong results

  check rank    0  istart 8268  iend    12518
  check rank    1  istart 8268  iend    12518
  check rank    2  istart 8268  iend    12518
  check rank    3  istart 8268  iend    12518


  !c *    test part *
  !c method 1: create DMPlex from cell list, same
duplicated global meshes over all processors
  !c the input parameters num_cells, num_nodes,
dmplex_cells, dmplex_verts are all global parameters (global
mesh data)
  call
DMPlexCreateFromCellList(Petsc_Comm_World,ndim,num_cells, &
num_nodes,num_nodes_per_cell,  &
Petsc_True,dmplex_cells,ndim,  &
dmplex_verts,dmda_flow%da,ierr)
  CHKERRQ(ierr)


  !c method 2: create DMPlex from Gmsh file, for test
purpose, this works fine, each processor gets its own part
  call DMPlexCreateFromFile(Petsc_Comm_World, &
prefix(:l_prfx)//'.msh',0, &
dmda_flow%da,ierr)
  CHKERRQ(ierr)

  !c *end of test part*


  distributedMesh = PETSC_NULL_OBJECT

  !c distribute mesh over processes
  call DMPlexDistribute(dmda_flow%da,0,PETSC_NULL_OBJECT, &
distributedMesh,ierr)
  CHKERRQ(ierr)

  !c destroy original global mesh after distribution
  if (distributedMesh /= PETSC_NULL_OBJECT) then
    call DMDestroy(dmda_flow%da,ierr)
    CHKERRQ(ierr)
    !c set the global mesh as distributed mesh
    dmda_flow%da = distributedMesh
  end if

  !c get coordinates
  call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr)
  CHKERRQ(ierr)

  call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
  CHKERRQ(ierr)

  call DMGetDefaultSection(cda,cs,ierr)
  CHKERRQ(ierr)

  call PetscSectionGetChart(cs,istart,iend,ierr)
  CHKERRQ(ierr)

#ifdef DEBUG
    if(info_debug > 0) then
  write(*,*) "check rank ",rank," istart ",istart,"
iend ",iend
    end if
#endif


Thanks and regards,

Danyang




-- 
What most experimenters take for granted before they begin their

experiments is infinitely more interesting than any results to
which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
<http://www.caam.rice.edu/%7Emk51/>





--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/%7Emk51/>




Re: [petsc-users] Question on DMPlexCreateFromCellList and DMPlexCreateFromFile

2018-02-16 Thread Danyang Su



On 18-02-15 05:57 PM, Matthew Knepley wrote:
On Thu, Feb 15, 2018 at 7:40 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi Matt,

I have a question on DMPlexCreateFromCellList and
DMPlexCreateFromFile. When I use DMPlexCreateFromFile with Gmsh file
input, it works fine and each processor gets its own part.
However, when I use DMPlexCreateFromCellList, all the processors
have the same global mesh. To my understanding, I should put the
global mesh as input, right?


No. Each process should get part of the mesh in CreateFromCellList(), 
but the most common thing to do is to

feed the whole mesh in on proc 0, and nothing in on the other procs.

Thanks for the explanation. It works now.

Danyang


  Thanks,

    Matt

Otherwise, I should use DMPlexCreateFromCellListParallel instead
if the input is a local mesh.

Below is the test code I use; the results from method 1 are wrong and
those from method 2 are correct. Would you please help to check if I
did anything wrong with the DMPlexCreateFromCellList input?

!test with 4 processor, global num_cells = 8268, global num_nodes
= 4250

!correct results

 check rank    2  istart 2034 iend 3116
 check rank    3  istart 2148 iend 3293
 check rank    1  istart 2044 iend 3133
 check rank    0  istart 2042 iend 3131

!wrong results

  check rank    0  istart 8268 iend    12518
  check rank    1  istart 8268 iend    12518
  check rank    2  istart 8268 iend    12518
  check rank    3  istart 8268 iend    12518


  !c *    test part *
  !c method 1: create DMPlex from cell list, same duplicated
global meshes over all processors
  !c the input parameters num_cells, num_nodes, dmplex_cells,
dmplex_verts are all global parameters (global mesh data)
  call DMPlexCreateFromCellList(Petsc_Comm_World,ndim,num_cells, &
num_nodes,num_nodes_per_cell,  &
Petsc_True,dmplex_cells,ndim,  &
dmplex_verts,dmda_flow%da,ierr)
  CHKERRQ(ierr)


  !c method 2: create DMPlex from Gmsh file, for test purpose,
this works fine, each processor gets its own part
  call DMPlexCreateFromFile(Petsc_Comm_World, &
prefix(:l_prfx)//'.msh',0,  &
  dmda_flow%da,ierr)
  CHKERRQ(ierr)

  !c *end of test part*


  distributedMesh = PETSC_NULL_OBJECT

  !c distribute mesh over processes
  call DMPlexDistribute(dmda_flow%da,0,PETSC_NULL_OBJECT, &
    distributedMesh,ierr)
  CHKERRQ(ierr)

  !c destroy original global mesh after distribution
  if (distributedMesh /= PETSC_NULL_OBJECT) then
    call DMDestroy(dmda_flow%da,ierr)
    CHKERRQ(ierr)
    !c set the global mesh as distributed mesh
    dmda_flow%da = distributedMesh
  end if

  !c get coordinates
  call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr)
  CHKERRQ(ierr)

  call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
  CHKERRQ(ierr)

  call DMGetDefaultSection(cda,cs,ierr)
  CHKERRQ(ierr)

  call PetscSectionGetChart(cs,istart,iend,ierr)
  CHKERRQ(ierr)

#ifdef DEBUG
    if(info_debug > 0) then
  write(*,*) "check rank ",rank," istart ",istart," iend
",iend
    end if
#endif


Thanks and regards,

Danyang




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/%7Emk51/>




Re: [petsc-users] Is OpenMP still available for PETSc?

2017-06-30 Thread Danyang Su

Hi Barry,

Thanks for the quick response. What I want to test is whether OpenMP 
has any benefit when the total degrees of freedom per processor drop below 
5k. When using pure MPI my code shows good speedup if the total degrees of 
freedom per processor are above 10k, but below this value the parallel 
efficiency decreases.


The petsc 3.6 change log indicates

 * Removed all threadcomm support including --with-pthreadclasses and
   --with-openmpclasses configure arguments

I guess PETSc 3.5 is the last version I can test, right?

Thanks,

Danyang


On 17-06-30 03:49 PM, Barry Smith wrote:

   The current version of PETSc does not use OpenMP, you are free to use OpenMP 
in your portions of the code of course. If you want PETSc using OpenMP you have 
to use the old, unsupported version of PETSc. We never found any benefit to 
using OpenMP.

Barry


On Jun 30, 2017, at 5:40 PM, Danyang Su <danyang...@gmail.com> wrote:

Dear All,

I recall that OpenMP was available for PETSc in an old development version. Googling 
"petsc hybrid mpi openmp" returns some papers about this feature. My code was 
first parallelized using OpenMP and then redeveloped using PETSc, with OpenMP kept but not used 
together with MPI. Before retesting the code with hybrid MPI-OpenMP, I picked the PETSc example 
ex10 and added "omp_set_num_threads(max_threads);" right after PetscInitialize.

The PETSc is the current development version configured as follows

--with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-debugging=0 --CFLAGS=-fopenmp --CXXFLAGS=-fopenmp 
--FFLAGS=-fopenmp COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native 
-mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native" --with-large-file-io=1 
--download-cmake=yes --download-mumps --download-scalapack --download-parmetis --download-metis 
--download-ptscotch --download-fblaslapack --download-mpich --download-hypre --download-superlu_dist 
--download-hdf5=yes --with-openmp --with-threadcomm --with-pthreadclasses --with-openmpclasses

The code compiles successfully. However, when I run it with OpenMP it does not 
work: the timing shows no change in performance whether 1 or 2 threads per 
processor are used. Also, the CPU/thread usage indicates that no extra threads 
are used.

I just wonder if OpenMP is still available in the latest version, even though 
it is not recommended.

mpiexec -n 2 ./ex10 -f0 mat_rhs_pc_nonzero/a_react_in_2.bin -rhs 
mat_rhs_pc_nonzero/b_react_in_2.bin -ksp_rtol 1.0e-20 -ksp_monitor 
-ksp_error_if_not_converged -sub_pc_factor_shift_type nonzero -mat_view 
ascii::ascii_info -log_view -max_threads 1 -threadcomm_type openmp 
-threadcomm_nthreads 1

KSPSolve   1 1.0 8.9934e-01 1.0 1.03e+09 1.0 7.8e+01 3.6e+04 
7.8e+01 69 97 89  6 76  89 97 98 98 96  2290
PCSetUp2 1.0 8.9590e-02 1.0 2.91e+07 1.0 0.0e+00 0.0e+00 
0.0e+00  7  3  0  0  0   9  3  0  0  0   648
PCSetUpOnBlocks2 1.0 8.9465e-02 1.0 2.91e+07 1.0 0.0e+00 0.0e+00 
0.0e+00  7  3  0  0  0   9  3  0  0  0   649
PCApply   40 1.0 3.1993e-01 1.0 2.70e+08 1.0 0.0e+00 0.0e+00 
0.0e+00 24 25  0  0  0  32 25  0  0  0  1686

mpiexec -n 2 ./ex10 -f0 mat_rhs_pc_nonzero/a_react_in_2.bin -rhs 
mat_rhs_pc_nonzero/b_react_in_2.bin -ksp_rtol 1.0e-20 -ksp_monitor 
-ksp_error_if_not_converged -sub_pc_factor_shift_type nonzero -mat_view 
ascii::ascii_info -log_view -max_threads 2 -threadcomm_type openmp 
-threadcomm_nthreads 2

KSPSolve   1 1.0 8.9701e-01 1.0 1.03e+09 1.0 7.8e+01 3.6e+04 
7.8e+01 69 97 89  6 76  89 97 98 98 96  2296
PCSetUp2 1.0 8.7635e-02 1.0 2.91e+07 1.0 0.0e+00 0.0e+00 
0.0e+00  7  3  0  0  0   9  3  0  0  0   663
PCSetUpOnBlocks2 1.0 8.7511e-02 1.0 2.91e+07 1.0 0.0e+00 0.0e+00 
0.0e+00  7  3  0  0  0   9  3  0  0  0   664
PCApply   40 1.0 3.1878e-01 1.0 2.70e+08 1.0 0.0e+00 0.0e+00 
0.0e+00 24 25  0  0  0  32 25  0  0  0  1692

Thanks and regards,

Danyang







Re: [petsc-users] PCFactorSetShiftType does not work in code but -pc_factor_set_shift_type works

2017-05-25 Thread Danyang Su

Hi Hong,

It works like a charm. I really appreciate your help.

Regards,

Danyang


On 17-05-25 07:49 AM, Hong wrote:

Danyang:
You must access the inner PC, then set the shift. See
petsc/src/ksp/ksp/examples/tutorials/ex7.c

For example, I add the following to 
petsc/src/ksp/ksp/examples/tutorials/ex2.c, at line 191:

  PetscBool isbjacobi;
  PC        pc;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PetscObjectTypeCompare((PetscObject)pc,PCBJACOBI,&isbjacobi);CHKERRQ(ierr);

  if (isbjacobi) {
    PetscInt nlocal;
    KSP      *subksp;
    PC       subpc;

    ierr = KSPSetUp(ksp);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);

    /* Extract the array of KSP contexts for the local blocks */
    ierr = PCBJacobiGetSubKSP(pc,&nlocal,NULL,&subksp);CHKERRQ(ierr);
    printf("isbjacobi, nlocal %D, set option to subpc...\n",nlocal);
    for (i=0; i<nlocal; i++) {
      ierr = KSPGetPC(subksp[i],&subpc);CHKERRQ(ierr);
      ierr = PCFactorSetShiftType(subpc,MAT_SHIFT_NONZERO);CHKERRQ(ierr);
    }
  }
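
A rough Fortran translation of that snippet for the flow KSP (not from the
original thread; it assumes the usual Fortran calling sequence for
PCBJacobiGetSubKSP, where a first call with a null KSP argument
(PETSC_NULL_KSP, or PETSC_NULL_OBJECT in older releases) returns the number of
local blocks, and max_blocks is a made-up array bound):

  PetscInt :: nlocal, firstlocal, iblk
  PetscInt, parameter :: max_blocks = 128   ! made-up upper bound on local blocks
  KSP :: subksp(max_blocks)
  PC :: subpc

  ! assuming pc_flow has already been set to PCBJACOBI, as in the code below
  call KSPSetUp(ksp_flow, ierr)
  CHKERRQ(ierr)
  ! first call just queries the number of local blocks
  call PCBJacobiGetSubKSP(pc_flow, nlocal, firstlocal, PETSC_NULL_KSP, ierr)
  CHKERRQ(ierr)
  call PCBJacobiGetSubKSP(pc_flow, nlocal, firstlocal, subksp, ierr)
  CHKERRQ(ierr)
  do iblk = 1, nlocal
    call KSPGetPC(subksp(iblk), subpc, ierr)
    CHKERRQ(ierr)
    call PCFactorSetShiftType(subpc, MAT_SHIFT_NONZERO, ierr)
    CHKERRQ(ierr)
  end do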


Dear Hong and Barry,

I have implemented this option in the code, as we also need to read
the configuration from a file for convenience. When I run the code using
command-line options it works fine; however, when I run the code using
the configuration file it does not work. The code has two sets of
equations, flow and reactive, with the prefixes set to "flow_" and
"react_". When I run the code using

mpiexec -n 4 ../executable -flow_sub_pc_factor_shift_type nonzero
-react_sub_pc_factor_shift_type nonzero

it works. However, if I run using

mpiexec -n 4 ../executable

and let the executable read the options from the file, it just
does not work at "call
PCFactorSetShiftType(pc_flow,MAT_SHIFT_NONZERO, ierr)" (or none,
positive_definite, ...). Am I missing something here?

Below is the pseudo code I have used for flow equations, similar
for reactive equations.

  call MatCreateAIJ(Petsc_Comm_World,nndof,nndof,nngbldof, &
nngbldof,d_nz,PETSC_NULL_INTEGER,o_nz, &
PETSC_NULL_INTEGER,a_flow,ierr)
  CHKERRQ(ierr)

call MatSetFromOptions(a_flow,ierr)
CHKERRQ(ierr)

call KSPCreate(Petsc_Comm_World, ksp_flow, ierr)
CHKERRQ(ierr)

call KSPAppendOptionsPrefix(ksp_flow,"flow_",ierr)
CHKERRQ(ierr)

call KSPSetInitialGuessNonzero(ksp_flow, &
b_initial_guess_nonzero_flow, ierr)
CHKERRQ(ierr)

call KSPSetInitialGuessNonzero(ksp_flow, &
b_initial_guess_nonzero_flow, ierr)
CHKERRQ(ierr)

call KSPSetDM(ksp_flow,dmda_flow%da,ierr)
CHKERRQ(ierr)
call KSPSetDMActive(ksp_flow,PETSC_FALSE,ierr)
CHKERRQ(ierr)

*CHECK IF READ OPTION FROM FILE*
if (read_option_from_file) then

  call KSPSetType(ksp_flow, KSPGMRES, ierr) !or KSPBCGS or
others...
  CHKERRQ(ierr)

  call KSPGetPC(ksp_flow, pc_flow, ierr)
  CHKERRQ(ierr)

  call PCSetType(pc_flow,PCBJACOBI, ierr)   !or PCILU
or PCJACOBI or PCHYPRE ...
  CHKERRQ(ierr)

  call PCFactorSetShiftType(pc_flow,MAT_SHIFT_NONZERO,
ierr)  or none, positive_definite ...
  CHKERRQ(ierr)

end if

call
PCFactorGetMatSolverPackage(pc_flow,solver_pkg_flow,ierr)
CHKERRQ(ierr)

call compute_jacobian(rank,dmda_flow%da, &
a_flow,a_in,ia_in,ja_in,nngl_in, &
row_idx_l2pg,col_idx_l2pg,&
  b_non_interlaced)
call KSPSetFromOptions(ksp_flow,ierr)
CHKERRQ(ierr)

call KSPSetUp(ksp_flow,ierr)
CHKERRQ(ierr)

call KSPSetUpOnBlocks(ksp_flow,ierr)
CHKERRQ(ierr)

call KSPSolve(ksp_flow,b_flow,x_flow,ierr)
CHKERRQ(ierr)


Thanks and Regards,

Danyang

On 17-05-24 06:32 PM, Hong wrote:

Remove your option '-vecload_block_size 10'.
Hong

On Wed, May 24, 2017 at 3:06 PM, Danyang Su <danyang...@gmail.com
<mailto:danyang...@gmail.com>> wrote:

Dear Hong,

I just tested with different number of processors for the
same matrix. It sometimes got "ERROR: Arguments are
incompatible" for different number of processors. It works
fine using 4, 8, or 24 processors, but failed with "ERROR:
Arguments are incompatible" using 16 or 48 processors. The
error information is attached. I tested this on my local
computer with 6 cores 12 threads. Any suggestion on this?

Thanks,

Danyang


On 17-05-24 12:28 PM, Danyang Su wrote:


Hi Hong,

   

[petsc-users] PCFactorSetShiftType does not work in code but -pc_factor_set_shift_type works

2017-05-25 Thread Danyang Su

Dear Hong and Barry,

I have implemented this option in the code, as we also need to read the 
configuration from a file for convenience. When I run the code using 
command-line options it works fine; however, when I run the code using the 
configuration file it does not work. The code has two sets of equations, 
flow and reactive, with the prefixes set to "flow_" and "react_". When I 
run the code using


mpiexec -n 4 ../executable -flow_sub_pc_factor_shift_type nonzero 
-react_sub_pc_factor_shift_type nonzero


it works. However, if I run using

mpiexec -n 4 ../executable

and let the executable read the options from the file, it just does not 
work at "call PCFactorSetShiftType(pc_flow,MAT_SHIFT_NONZERO, ierr)" (or 
MAT_SHIFT_NONE, MAT_SHIFT_POSITIVE_DEFINITE, ...). Am I missing something 
here?


Below is the pseudo code I have used for flow equations, similar for 
reactive equations.


  call MatCreateAIJ(Petsc_Comm_World,nndof,nndof,nngbldof, &
nngbldof,d_nz,PETSC_NULL_INTEGER,o_nz, &
PETSC_NULL_INTEGER,a_flow,ierr)
  CHKERRQ(ierr)

call MatSetFromOptions(a_flow,ierr)
CHKERRQ(ierr)

call KSPCreate(Petsc_Comm_World, ksp_flow, ierr)
CHKERRQ(ierr)

call KSPAppendOptionsPrefix(ksp_flow,"flow_",ierr)
CHKERRQ(ierr)

call KSPSetInitialGuessNonzero(ksp_flow,   &
b_initial_guess_nonzero_flow, ierr)
CHKERRQ(ierr)

call KSPSetInitialGuessNonzero(ksp_flow,   &
b_initial_guess_nonzero_flow, ierr)
CHKERRQ(ierr)

call KSPSetDM(ksp_flow,dmda_flow%da,ierr)
CHKERRQ(ierr)
call KSPSetDMActive(ksp_flow,PETSC_FALSE,ierr)
CHKERRQ(ierr)

*CHECK IF READ OPTION FROM FILE*
if (read_option_from_file) then

  call KSPSetType(ksp_flow, KSPGMRES, ierr) ! or KSPBCGS or others...

  CHKERRQ(ierr)

  call KSPGetPC(ksp_flow, pc_flow, ierr)
  CHKERRQ(ierr)

  call PCSetType(pc_flow,PCBJACOBI, ierr)   !or PCILU or 
PCJACOBI or PCHYPRE ...

  CHKERRQ(ierr)

  call PCFactorSetShiftType(pc_flow,MAT_SHIFT_NONZERO,ierr) ! or MAT_SHIFT_NONE, MAT_SHIFT_POSITIVE_DEFINITE ...

  CHKERRQ(ierr)

end if

call PCFactorGetMatSolverPackage(pc_flow,solver_pkg_flow,ierr)
CHKERRQ(ierr)

call compute_jacobian(rank,dmda_flow%da,   &
a_flow,a_in,ia_in,ja_in,nngl_in, &
row_idx_l2pg,col_idx_l2pg,   &
  b_non_interlaced)
call KSPSetFromOptions(ksp_flow,ierr)
CHKERRQ(ierr)

call KSPSetUp(ksp_flow,ierr)
CHKERRQ(ierr)

call KSPSetUpOnBlocks(ksp_flow,ierr)
CHKERRQ(ierr)

call KSPSolve(ksp_flow,b_flow,x_flow,ierr)
CHKERRQ(ierr)


Thanks and Regards,

Danyang

On 17-05-24 06:32 PM, Hong wrote:

Remove your option '-vecload_block_size 10'.
Hong

On Wed, May 24, 2017 at 3:06 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Dear Hong,

I just tested with different number of processors for the same
matrix. It sometimes got "ERROR: Arguments are incompatible" for
different number of processors. It works fine using 4, 8, or 24
processors, but failed with "ERROR: Arguments are incompatible"
using 16 or 48 processors. The error information is attached. I
tested this on my local computer with 6 cores 12 threads. Any
suggestion on this?

Thanks,

Danyang


On 17-05-24 12:28 PM, Danyang Su wrote:


Hi Hong,

Awesome. Thanks for testing the case. I will try your options for
the code and get back to you later.

Regards,

Danyang


On 17-05-24 12:21 PM, Hong wrote:

Danyang :
I tested your data.
Your matrices encountered zero pivots, e.g.
petsc/src/ksp/ksp/examples/tutorials (master)
$ mpiexec -n 24 ./ex10 -f0 a_react_in_2.bin -rhs
b_react_in_2.bin -ksp_monitor -ksp_error_if_not_converged

[15]PETSC ERROR: Zero pivot in LU factorization:
http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot
<http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot>
[15]PETSC ERROR: Zero pivot row 1249 value 2.05808e-14 tolerance
2.22045e-14
...

Adding option '-sub_pc_factor_shift_type nonzero', I got
mpiexec -n 24 ./ex10 -f0 a_react_in_2.bin -rhs b_react_in_2.bin
-ksp_monitor -ksp_error_if_not_converged
-sub_pc_factor_shift_type nonzero -mat_view ascii::ascii_info

Mat Object: 24 MPI processes
  type: mpiaij
  rows=45, cols=45
  total: nonzeros=6991400, allocated nonzeros=6991400
  total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
  0 KSP Residual norm 5.84911755e+01
  1 KSP Residual norm

Re: [petsc-users] Question on incomplete factorization level and fill

2017-05-24 Thread Danyang Su

Hi All,

I just delete the .info file and it works without problem now.

Thanks,

Danyang


On 17-05-24 06:32 PM, Hong wrote:

Remove your option '-vecload_block_size 10'.
Hong

On Wed, May 24, 2017 at 3:06 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Dear Hong,

I just tested with different number of processors for the same
matrix. It sometimes got "ERROR: Arguments are incompatible" for
different number of processors. It works fine using 4, 8, or 24
processors, but failed with "ERROR: Arguments are incompatible"
using 16 or 48 processors. The error information is attached. I
tested this on my local computer with 6 cores 12 threads. Any
suggestion on this?

Thanks,

Danyang


    On 17-05-24 12:28 PM, Danyang Su wrote:


Hi Hong,

Awesome. Thanks for testing the case. I will try your options for
the code and get back to you later.

Regards,

Danyang


On 17-05-24 12:21 PM, Hong wrote:

Danyang :
I tested your data.
Your matrices encountered zero pivots, e.g.
petsc/src/ksp/ksp/examples/tutorials (master)
$ mpiexec -n 24 ./ex10 -f0 a_react_in_2.bin -rhs
b_react_in_2.bin -ksp_monitor -ksp_error_if_not_converged

[15]PETSC ERROR: Zero pivot in LU factorization:
http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot
<http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot>
[15]PETSC ERROR: Zero pivot row 1249 value 2.05808e-14 tolerance
2.22045e-14
...

Adding option '-sub_pc_factor_shift_type nonzero', I got
mpiexec -n 24 ./ex10 -f0 a_react_in_2.bin -rhs b_react_in_2.bin
-ksp_monitor -ksp_error_if_not_converged
-sub_pc_factor_shift_type nonzero -mat_view ascii::ascii_info

Mat Object: 24 MPI processes
  type: mpiaij
  rows=45, cols=45
  total: nonzeros=6991400, allocated nonzeros=6991400
  total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
  0 KSP Residual norm 5.84911755e+01
  1 KSP Residual norm 6.824179430230e-01
  2 KSP Residual norm 3.994483555787e-02
  3 KSP Residual norm 6.085841461433e-03
  4 KSP Residual norm 8.876162583511e-04
  5 KSP Residual norm 9.407780665278e-05
Number of iterations =   5
Residual norm 0.00542891

Hong

Hi Matt,

Yes. The matrix is 45x45 sparse. The hypre takes
hundreds of iterates, not for all but in most of the
timesteps. The matrix is not well conditioned, with nonzero
entries range from 1.0e-29 to 1.0e2. I also made double
check if there is anything wrong in the parallel version,
however, the matrix is the same with sequential version
except some round error which is relatively very small.
Usually for those not well conditioned matrix, direct solver
should be faster than iterative solver, right? But when I
use the sequential iterative solver with ILU prec developed
almost 20 years go by others, the solver converge fast with
appropriate factorization level. In other words, when I use
24 processor using hypre, the speed is almost the same as as
the old sequential iterative solver using 1 processor.

I use most of the default configuration for the general case
with pretty good speedup. And I am not sure if I miss
something for this problem.

Thanks,

Danyang


On 17-05-24 11:12 AM, Matthew Knepley wrote:

On Wed, May 24, 2017 at 12:50 PM, Danyang Su
<danyang...@gmail.com <mailto:danyang...@gmail.com>> wrote:

Hi Matthew and Barry,

Thanks for the quick response.

I also tried superlu and mumps, both work but it is
about four times slower than ILU(dt) prec through
hypre, with 24 processors I have tested.

You mean the total time is 4x? And you are taking hundreds
of iterates? That seems hard to believe, unless you are
dropping
a huge number of elements.

When I look into the convergence information, the
method using ILU(dt) still takes 200 to 3000 linear
iterations for each newton iteration. One reason is
this equation is hard to solve. As for the general
cases, the same method works awesome and get very good
speedup.

I do not understand what you mean here.

I also doubt if I use hypre correctly for this case. Is
there anyway to check this problem, or is it possible
to increase the factorization level through hypre?

I don't know.

  Matt

Thanks,

Danyang


On 17-05-24 04:59 AM, Matthew Knepley wrote:

        On Wed, May 24, 2017 at 2:21 AM, Danyang Su
<danyang...@gmail

Re: [petsc-users] Question on incomplete factorization level and fill

2017-05-24 Thread Danyang Su

Dear Hong,

I just tested the same matrix with different numbers of processors. It 
sometimes fails with "ERROR: Arguments are incompatible", depending on the 
number of processors: it works fine using 4, 8, or 24 processors, but 
fails with "ERROR: Arguments are incompatible" using 16 or 48 processors. 
The error information is attached. I tested this on my local computer with 
6 cores / 12 threads. Any suggestions?


Thanks,

Danyang


On 17-05-24 12:28 PM, Danyang Su wrote:


Hi Hong,

Awesome. Thanks for testing the case. I will try your options for the 
code and get back to you later.


Regards,

Danyang


On 17-05-24 12:21 PM, Hong wrote:

Danyang :
I tested your data.
Your matrices encountered zero pivots, e.g.
petsc/src/ksp/ksp/examples/tutorials (master)
$ mpiexec -n 24 ./ex10 -f0 a_react_in_2.bin -rhs b_react_in_2.bin 
-ksp_monitor -ksp_error_if_not_converged


[15]PETSC ERROR: Zero pivot in LU factorization: 
http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot
[15]PETSC ERROR: Zero pivot row 1249 value 2.05808e-14 tolerance 
2.22045e-14

...

Adding option '-sub_pc_factor_shift_type nonzero', I got
mpiexec -n 24 ./ex10 -f0 a_react_in_2.bin -rhs b_react_in_2.bin 
-ksp_monitor -ksp_error_if_not_converged -sub_pc_factor_shift_type 
nonzero -mat_view ascii::ascii_info


Mat Object: 24 MPI processes
  type: mpiaij
  rows=45, cols=45
  total: nonzeros=6991400, allocated nonzeros=6991400
  total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
  0 KSP Residual norm 5.84911755e+01
  1 KSP Residual norm 6.824179430230e-01
  2 KSP Residual norm 3.994483555787e-02
  3 KSP Residual norm 6.085841461433e-03
  4 KSP Residual norm 8.876162583511e-04
  5 KSP Residual norm 9.407780665278e-05
Number of iterations =   5
Residual norm 0.00542891

Hong

Hi Matt,

Yes. The matrix is 45x45 sparse. The hypre takes hundreds
of iterates, not for all but in most of the timesteps. The matrix
is not well conditioned, with nonzero entries range from 1.0e-29
to 1.0e2. I also made double check if there is anything wrong in
the parallel version, however, the matrix is the same with
sequential version except some round error which is relatively
very small. Usually for those not well conditioned matrix, direct
solver should be faster than iterative solver, right? But when I
use the sequential iterative solver with ILU prec developed
almost 20 years go by others, the solver converge fast with
appropriate factorization level. In other words, when I use 24
processor using hypre, the speed is almost the same as as the old
sequential iterative solver using 1 processor.

I use most of the default configuration for the general case with
pretty good speedup. And I am not sure if I miss something for
this problem.

Thanks,

Danyang


On 17-05-24 11:12 AM, Matthew Knepley wrote:

On Wed, May 24, 2017 at 12:50 PM, Danyang Su
<danyang...@gmail.com <mailto:danyang...@gmail.com>> wrote:

Hi Matthew and Barry,

Thanks for the quick response.

I also tried superlu and mumps, both work but it is about
four times slower than ILU(dt) prec through hypre, with 24
processors I have tested.

You mean the total time is 4x? And you are taking hundreds of
iterates? That seems hard to believe, unless you are dropping
a huge number of elements.

When I look into the convergence information, the method
using ILU(dt) still takes 200 to 3000 linear iterations for
each newton iteration. One reason is this equation is hard
to solve. As for the general cases, the same method works
awesome and get very good speedup.

I do not understand what you mean here.

I also doubt if I use hypre correctly for this case. Is
there anyway to check this problem, or is it possible to
increase the factorization level through hypre?

I don't know.

  Matt

Thanks,

Danyang


On 17-05-24 04:59 AM, Matthew Knepley wrote:

On Wed, May 24, 2017 at 2:21 AM, Danyang Su
<danyang...@gmail.com <mailto:danyang...@gmail.com>> wrote:

Dear All,

I use PCFactorSetLevels for ILU and PCFactorSetFill for
other preconditioning in my code to help solve the
problems that the default option is hard to solve.
However, I found the latter one, PCFactorSetFill does
not take effect for my problem. The matrices and rhs as
well as the solutions are attached from the link below.
I obtain the solution using hypre preconditioner and it
takes 7 and 38 iterations for matrix 1 and matrix 2.
However, if I use other preconditioner, the solver just
failed at the first matrix. I have tested this matrix
u

Re: [petsc-users] Question on incomplete factorization level and fill

2017-05-24 Thread Danyang Su

Hi Hong,

Awesome. Thanks for testing the case. I will try your options for the 
code and get back to you later.


Regards,

Danyang


On 17-05-24 12:21 PM, Hong wrote:

Danyang :
I tested your data.
Your matrices encountered zero pivots, e.g.
petsc/src/ksp/ksp/examples/tutorials (master)
$ mpiexec -n 24 ./ex10 -f0 a_react_in_2.bin -rhs b_react_in_2.bin 
-ksp_monitor -ksp_error_if_not_converged


[15]PETSC ERROR: Zero pivot in LU factorization: 
http://www.mcs.anl.gov/petsc/documentation/faq.html#zeropivot
[15]PETSC ERROR: Zero pivot row 1249 value 2.05808e-14 tolerance 
2.22045e-14

...

Adding option '-sub_pc_factor_shift_type nonzero', I got
mpiexec -n 24 ./ex10 -f0 a_react_in_2.bin -rhs b_react_in_2.bin 
-ksp_monitor -ksp_error_if_not_converged -sub_pc_factor_shift_type 
nonzero -mat_view ascii::ascii_info


Mat Object: 24 MPI processes
  type: mpiaij
  rows=45, cols=45
  total: nonzeros=6991400, allocated nonzeros=6991400
  total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
  0 KSP Residual norm 5.84911755e+01
  1 KSP Residual norm 6.824179430230e-01
  2 KSP Residual norm 3.994483555787e-02
  3 KSP Residual norm 6.085841461433e-03
  4 KSP Residual norm 8.876162583511e-04
  5 KSP Residual norm 9.407780665278e-05
Number of iterations =   5
Residual norm 0.00542891

Hong

Hi Matt,

Yes. The matrix is 45x45 sparse. The hypre takes hundreds
of iterates, not for all but in most of the timesteps. The matrix
is not well conditioned, with nonzero entries range from 1.0e-29
to 1.0e2. I also made double check if there is anything wrong in
the parallel version, however, the matrix is the same with
sequential version except some round error which is relatively
very small. Usually for those not well conditioned matrix, direct
solver should be faster than iterative solver, right? But when I
use the sequential iterative solver with ILU prec developed almost
20 years go by others, the solver converge fast with appropriate
factorization level. In other words, when I use 24 processor using
hypre, the speed is almost the same as as the old sequential
iterative solver using 1 processor.

I use most of the default configuration for the general case with
pretty good speedup. And I am not sure if I miss something for
this problem.

Thanks,

Danyang


On 17-05-24 11:12 AM, Matthew Knepley wrote:

On Wed, May 24, 2017 at 12:50 PM, Danyang Su
<danyang...@gmail.com <mailto:danyang...@gmail.com>> wrote:

Hi Matthew and Barry,

Thanks for the quick response.

I also tried superlu and mumps, both work but it is about
four times slower than ILU(dt) prec through hypre, with 24
processors I have tested.

You mean the total time is 4x? And you are taking hundreds of
iterates? That seems hard to believe, unless you are dropping
a huge number of elements.

When I look into the convergence information, the method
using ILU(dt) still takes 200 to 3000 linear iterations for
each newton iteration. One reason is this equation is hard to
solve. As for the general cases, the same method works
awesome and get very good speedup.

I do not understand what you mean here.

I also doubt if I use hypre correctly for this case. Is there
anyway to check this problem, or is it possible to increase
the factorization level through hypre?

I don't know.

  Matt

Thanks,

Danyang


On 17-05-24 04:59 AM, Matthew Knepley wrote:

On Wed, May 24, 2017 at 2:21 AM, Danyang Su
<danyang...@gmail.com <mailto:danyang...@gmail.com>> wrote:

Dear All,

I use PCFactorSetLevels for ILU and PCFactorSetFill for
other preconditioning in my code to help solve the
problems that the default option is hard to solve.
However, I found the latter one, PCFactorSetFill does
not take effect for my problem. The matrices and rhs as
well as the solutions are attached from the link below.
I obtain the solution using hypre preconditioner and it
takes 7 and 38 iterations for matrix 1 and matrix 2.
However, if I use other preconditioner, the solver just
failed at the first matrix. I have tested this matrix
using the native sequential solver (not PETSc) with ILU
preconditioning. If I set the incomplete factorization
level to 0, this sequential solver will take more than
100 iterations. If I increase the factorization level to
1 or more, it just takes several iterations. This remind
me that the PC factor for this matrices should be
increased. However, when I tried it in PETSc, it just
does not work.

Matrix

Re: [petsc-users] Question on incomplete factorization level and fill

2017-05-24 Thread Danyang Su

Hi Matt,

Yes. The matrix is 45x45 sparse. Hypre takes hundreds of iterations, not 
for all timesteps but for most of them. The matrix is not well 
conditioned, with nonzero entries ranging from 1.0e-29 to 1.0e2. I also 
double-checked whether anything is wrong in the parallel version; however, 
the matrix is the same as in the sequential version except for some 
round-off error, which is relatively very small. Usually for such 
ill-conditioned matrices a direct solver should be faster than an 
iterative solver, right? But when I use the sequential iterative solver 
with ILU preconditioning developed almost 20 years ago by others, the 
solver converges quickly with an appropriate factorization level. In other 
words, when I use 24 processors with hypre, the speed is almost the same 
as the old sequential iterative solver using 1 processor.


I use mostly the default configuration for the general case, with pretty 
good speedup, and I am not sure whether I am missing something for this problem.


Thanks,

Danyang


On 17-05-24 11:12 AM, Matthew Knepley wrote:
On Wed, May 24, 2017 at 12:50 PM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Hi Matthew and Barry,

Thanks for the quick response.

I also tried superlu and mumps, both work but it is about four
times slower than ILU(dt) prec through hypre, with 24 processors I
have tested.

You mean the total time is 4x? And you are taking hundreds of 
iterates? That seems hard to believe, unless you are dropping

a huge number of elements.

When I look into the convergence information, the method using
ILU(dt) still takes 200 to 3000 linear iterations for each newton
iteration. One reason is this equation is hard to solve. As for
the general cases, the same method works awesome and get very good
speedup.

I do not understand what you mean here.

I also doubt if I use hypre correctly for this case. Is there
anyway to check this problem, or is it possible to increase the
factorization level through hypre?

I don't know.

  Matt

Thanks,

Danyang


On 17-05-24 04:59 AM, Matthew Knepley wrote:

On Wed, May 24, 2017 at 2:21 AM, Danyang Su <danyang...@gmail.com
<mailto:danyang...@gmail.com>> wrote:

Dear All,

I use PCFactorSetLevels for ILU and PCFactorSetFill for other
preconditioning in my code to help solve the problems that
the default option is hard to solve. However, I found the
latter one, PCFactorSetFill does not take effect for my
problem. The matrices and rhs as well as the solutions are
attached from the link below. I obtain the solution using
hypre preconditioner and it takes 7 and 38 iterations for
matrix 1 and matrix 2. However, if I use other
preconditioner, the solver just failed at the first matrix. I
have tested this matrix using the native sequential solver
(not PETSc) with ILU preconditioning. If I set the incomplete
factorization level to 0, this sequential solver will take
more than 100 iterations. If I increase the factorization
level to 1 or more, it just takes several iterations. This
remind me that the PC factor for this matrices should be
increased. However, when I tried it in PETSc, it just does
not work.

Matrix and rhs can be obtained from the link below.

https://eilinator.eos.ubc.ca:8443/index.php/s/CalUcq9CMeblk4R
<https://eilinator.eos.ubc.ca:8443/index.php/s/CalUcq9CMeblk4R>

Would anyone help to check if you can make this work by
increasing the PC factor level or fill?


We have ILU(k) supported in serial. However ILU(dt) which takes a
tolerance only works through Hypre

http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html
<http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html>

I recommend you try SuperLU or MUMPS, which can both be
downloaded automatically by configure, and
do a full sparse LU.

  Thanks,

Matt

Thanks and regards,

Danyang





-- 
What most experimenters take for granted before they begin their

experiments is infinitely more interesting than any results to
which their experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/ <http://www.caam.rice.edu/%7Emk51/>





--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

http://www.caam.rice.edu/~mk51/ <http://www.caam.rice.edu/%7Emk51/>




Re: [petsc-users] Question on incomplete factorization level and fill

2017-05-24 Thread Danyang Su

Hi Matthew and Barry,

Thanks for the quick response.

I also tried superlu and mumps; both work, but they are about four times 
slower than ILU(dt) preconditioning through hypre with the 24 processors I 
have tested. When I look into the convergence information, the method 
using ILU(dt) still takes 200 to 3000 linear iterations for each Newton 
iteration. One reason is that this equation is hard to solve. For the 
general cases, the same method works very well and gets very good speedup. 
I also wonder whether I am using hypre correctly for this case. Is there 
any way to check this, or is it possible to increase the factorization 
level through hypre?


Thanks,

Danyang


On 17-05-24 04:59 AM, Matthew Knepley wrote:
On Wed, May 24, 2017 at 2:21 AM, Danyang Su <danyang...@gmail.com 
<mailto:danyang...@gmail.com>> wrote:


Dear All,

I use PCFactorSetLevels for ILU and PCFactorSetFill for other
preconditioning in my code to help solve the problems that the
default option is hard to solve. However, I found the latter one,
PCFactorSetFill does not take effect for my problem. The matrices
and rhs as well as the solutions are attached from the link below.
I obtain the solution using hypre preconditioner and it takes 7
and 38 iterations for matrix 1 and matrix 2. However, if I use
other preconditioner, the solver just failed at the first matrix.
I have tested this matrix using the native sequential solver (not
PETSc) with ILU preconditioning. If I set the incomplete
factorization level to 0, this sequential solver will take more
than 100 iterations. If I increase the factorization level to 1 or
more, it just takes several iterations. This remind me that the PC
factor for this matrices should be increased. However, when I
tried it in PETSc, it just does not work.

Matrix and rhs can be obtained from the link below.

https://eilinator.eos.ubc.ca:8443/index.php/s/CalUcq9CMeblk4R
<https://eilinator.eos.ubc.ca:8443/index.php/s/CalUcq9CMeblk4R>

Would anyone help to check if you can make this work by increasing
the PC factor level or fill?


We have ILU(k) supported in serial. However ILU(dt) which takes a 
tolerance only works through Hypre


http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html

I recommend you try SuperLU or MUMPS, which can both be downloaded 
automatically by configure, and

do a full sparse LU.

  Thanks,

Matt

Thanks and regards,

Danyang





--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

http://www.caam.rice.edu/~mk51/ <http://www.caam.rice.edu/%7Emk51/>




[petsc-users] Question on incomplete factorization level and fill

2017-05-24 Thread Danyang Su

Dear All,

I use PCFactorSetLevels for ILU and PCFactorSetFill for other 
preconditioners in my code to help solve problems that the default options 
cannot handle. However, I found that the latter, PCFactorSetFill, does not 
take effect for my problem. The matrices and rhs as well as the solutions 
are available from the link below. I obtain the solution using the hypre 
preconditioner, and it takes 7 and 38 iterations for matrix 1 and matrix 
2. However, if I use another preconditioner, the solver simply fails at 
the first matrix. I have tested this matrix using a native sequential 
solver (not PETSc) with ILU preconditioning. If I set the incomplete 
factorization level to 0, this sequential solver takes more than 100 
iterations. If I increase the factorization level to 1 or more, it takes 
only several iterations. This suggests that the fill level for these 
matrices should be increased. However, when I tried it in PETSc, it just 
does not work.
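
For reference, a minimal sketch of how the native PETSc ILU level and fill
can be raised programmatically (hypothetical names; ksp and ierr are assumed
declared elsewhere). Note that with the default block-Jacobi PC in parallel
the factorization lives on the sub-PCs, so the runtime equivalents are
-sub_pc_factor_levels <k> and -sub_pc_factor_fill <f>.

      PC        pc
      PetscInt  ilu_levels
      PetscReal expected_fill

      call KSPGetPC(ksp, pc, ierr)
      CHKERRQ(ierr)
      call PCSetType(pc, PCILU, ierr)
      CHKERRQ(ierr)
      ilu_levels = 2                    ! ILU(2) instead of the default ILU(0)
      call PCFactorSetLevels(pc, ilu_levels, ierr)
      CHKERRQ(ierr)
      expected_fill = 5.0d0             ! expected fill ratio, a hint for allocation
      call PCFactorSetFill(pc, expected_fill, ierr)
      CHKERRQ(ierr)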


Matrix and rhs can be obtained from the link below.

https://eilinator.eos.ubc.ca:8443/index.php/s/CalUcq9CMeblk4R

Would anyone help to check if you can make this work by increasing the 
PC factor level or fill?


Thanks and regards,

Danyang




Re: [petsc-users] SuperLU convergence problem (More test)

2015-12-08 Thread Danyang Su

Hi Hong,

Sorry to bother you again. The modified code works much better than 
before using either superlu or mumps. However, it still encounters 
failures. The case is similar to the previous one: ill-conditioned 
matrices.


The code crashes after a long simulation time if I use superlu_dist, but 
does not fail if I use superlu. I restarted the simulation shortly before 
the time it crashes and can reproduce the following error


 timestep:  22  time: 1.750E+04 years  delt: 2.500E+00 years  iter: 1  max.sia: 5.053E-03  tol.sia: 1.000E-01

 Newton Iteration Convergence Summary:
 Newton     maximum      maximum      solver
 iteration  updatePa     updateTemp   residual     iterations  maxvolpa  maxvoltemp  nexvolpa  nexvoltemp
 1          0.1531E+08   0.1755E+04   0.6920E-05   1           5585  440258145814


*** Error in `../program_test': malloc(): memory corruption: 
0x03a70d50 ***

Program received signal SIGABRT: Process abort signal.
Backtrace for this error:

The solver failed at timestep 22, Newton iteration 2. I exported the 
matrices at timestep 1 (matrix 1) and timestep 22 (matrices 140 and 141). 
Matrix 141 is where it failed. The three matrices here are not 
ill-conditioned, judging from the estimated values.


I ran the same test using the newly modified ex52f code and found quite 
different results for matrix 141. The norm from superlu is much more 
acceptable than the one from superlu_dist. In this test, memory corruption 
was not detected. The code and example data can be downloaded from the 
link below.


https://www.dropbox.com/s/i1ls0bg0vt7gu0v/petsc-superlu-test2.tar.gz?dl=0


More test on matrix_and_rhs_bin2***
mpiexec.hydra -n 1 ./ex52f -f0 ./matrix_and_rhs_bin2/a_flow_check_1.bin 
-rhs ./matrix_and_rhs_bin2/b_flow_check_1.bin -loop_matrices flow_check 
-loop_folder ./matrix_and_rhs_bin2 -matrix_index_start 140 
-matrix_index_end 141  -pc_type lu -pc_factor_mat_solver_package superlu 
-ksp_monitor_true_residual -mat_superlu_conditionnumber

 -->loac matrix a
 -->load rhs b
 size l,m,n,mm   9   9   9   9
  Recip. condition number = 6.000846e-16
  0 KSP preconditioned resid norm 1.146871454377e+08 true resid norm 
4.711091037809e+03 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 2.071118508260e-06 true resid norm 
3.363767171515e-08 ||r(i)||/||b|| 7.140102249181e-12

Norm of error  3.3638E-08 iterations 1
 -->Test for matrix  140
  Recip. condition number = 2.256434e-27
  0 KSP preconditioned resid norm 2.084372893355e+14 true resid norm 
4.711091037809e+03 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 4.689629276419e+00 true resid norm 
1.037236635337e-01 ||r(i)||/||b|| 2.201690918330e-05

Norm of error  1.0372E-01 iterations 1
 -->Test for matrix  141
  Recip. condition number = 1.256452e-18
  0 KSP preconditioned resid norm 1.055488964519e+08 true resid norm 
4.711091037809e+03 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 2.998827511681e-04 true resid norm 
4.805214542776e-04 ||r(i)||/||b|| 1.019979130994e-07

Norm of error  4.8052E-04 iterations 1
 --> End of test, bye


mpiexec.hydra -n 1 ./ex52f -f0 ./matrix_and_rhs_bin2/a_flow_check_1.bin 
-rhs ./matrix_and_rhs_bin2/b_flow_check_1.bin -loop_matrices flow_check 
-loop_folder ./matrix_and_rhs_bin2 -matrix_index_start 140 
-matrix_index_end 141  -pc_type lu -pc_factor_mat_solver_package 
superlu_dist

 -->loac matrix a
 -->load rhs b
 size l,m,n,mm   9   9   9   9
Norm of error  3.6752E-08 iterations 1
 -->Test for matrix  140
Norm of error  1.6335E-01 iterations 1
 -->Test for matrix  141
Norm of error  3.4345E+01 iterations 1
 --> End of test, bye

Thanks,

Danyang

On 15-12-07 12:01 PM, Hong wrote:

Danyang:
Add 'call MatSetFromOptions(A,ierr)' to your code.
Attached below is ex52f.F modified from your ex52f.F to be compatible 
with petsc-dev.


Hong

Hello Hong,

Thanks for the quick reply and the option "-mat_superlu_dist_fact
SamePattern" works like a charm, if I use this option from the
command line.

How can I add this option as the default. I tried using
PetscOptionsInsertString("-mat_superlu_dist_fact
SamePattern",ierr) in my code but this does not work.

Thanks,

Danyang


On 15-12-07 10:42 AM, Hong wrote:

Danyang :

Adding '-mat_superlu_dist_fact SamePattern' fixed the problem.
Below is how I figured it out.

1. Reading ex52f.F, I see '-superlu_default' =
'-pc_factor_mat_solver_package superlu_dist', the later enables
runtime options for other packages. I use superlu_dist-4.2 and
superlu-4.1 for the tests below.

2. Use the Matrix 168 to setup KSP solver and factorization, all
packages, petsc, superlu_dist and mumps give same correct results:

./ex52f -f0 matrix_and_rhs_bin/a_flow_check_168.bin -rhs
matrix_and_rhs_bin/b_flow_check_168.bin 

Re: [petsc-users] SuperLU convergence problem (More test)

2015-12-08 Thread Danyang Su

Hi Hong,

Thanks for checking this. A mechanical model was added at the time the 
solver failed, causing some problems. We need to improve this part of the 
code.


Thanks again and best wishes,

Danyang

On 15-12-08 08:10 PM, Hong wrote:

Danyang :
Your matrices are ill-conditioned, numerically singular with
Recip. condition number = 6.000846e-16
Recip. condition number = 2.256434e-27
Recip. condition number = 1.256452e-18
i.e., condition numbers = O(1.e16 - 1.e27), there is no accuracy in 
computed solution.
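
For reference, the standard rule of thumb behind this remark (in LaTeX
notation): $\|\delta x\|/\|x\| \lesssim \kappa(A)\,\varepsilon_{\mathrm{mach}}$
with $\kappa(A)=\|A\|\,\|A^{-1}\|$ and
$\varepsilon_{\mathrm{mach}}\approx 2.2\times 10^{-16}$ in double precision,
so condition numbers of $10^{16}$ or larger leave essentially no correct
digits in the computed solution.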


I checked your matrix  168 - 172, got Recip. condition number 
= 1.548816e-12.


You need check your model to understand why the matrices are so 
ill-conditioned.


Hong

Hi Hong,

Sorry to bother you again. The modified code works much better
than before using both superlu or mumps. However, it still
encounters failure. The case is similar with the previous one,
ill-conditioned matrices.

The code crashed after a long time simulation if I use
superlu_dist, but will not fail if use superlu. I restart the
simulation before the time it crashes and can reproduce the
following error

 timestep:  22  time: 1.750E+04 years  delt: 2.500E+00 years  iter: 1  max.sia: 5.053E-03  tol.sia: 1.000E-01

 Newton Iteration Convergence Summary:
 Newton     maximum      maximum      solver
 iteration  updatePa     updateTemp   residual     iterations  maxvolpa  maxvoltemp  nexvolpa  nexvoltemp
 1          0.1531E+08   0.1755E+04   0.6920E-05   1           5585  440258145814


*** Error in `../program_test': malloc(): memory corruption:
0x03a70d50 ***
Program received signal SIGABRT: Process abort signal.
Backtrace for this error:

The solver failed at timestep 22, Newton iteration 2. I exported
the matrices at timestep 1 (matrix 1) and timestep 22 (matrix 140
and 141). Matrix 141 is where it failed.  The three matrices here
are not ill-conditioned form the estimated value.

I did the same using the new modified ex52f code and found pretty
different results for matrix 141. The norm by superlu is much
acceptable than superlu_dist. In this test, memory corruption was
not detected. The codes and example data can be download from the
link below.

https://www.dropbox.com/s/i1ls0bg0vt7gu0v/petsc-superlu-test2.tar.gz?dl=0


More test on matrix_and_rhs_bin2***
mpiexec.hydra -n 1 ./ex52f -f0
./matrix_and_rhs_bin2/a_flow_check_1.bin -rhs
./matrix_and_rhs_bin2/b_flow_check_1.bin -loop_matrices flow_check
-loop_folder ./matrix_and_rhs_bin2 -matrix_index_start 140
-matrix_index_end 141  -pc_type lu -pc_factor_mat_solver_package
superlu -ksp_monitor_true_residual -mat_superlu_conditionnumber
 -->loac matrix a
 -->load rhs b
 size l,m,n,mm   9   9 9   9
  Recip. condition number = 6.000846e-16
  0 KSP preconditioned resid norm 1.146871454377e+08 true resid
norm 4.711091037809e+03 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 2.071118508260e-06 true resid
norm 3.363767171515e-08 ||r(i)||/||b|| 7.140102249181e-12
Norm of error  3.3638E-08 iterations 1
 -->Test for matrix  140
  Recip. condition number = 2.256434e-27
  0 KSP preconditioned resid norm 2.084372893355e+14 true resid
norm 4.711091037809e+03 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 4.689629276419e+00 true resid
norm 1.037236635337e-01 ||r(i)||/||b|| 2.201690918330e-05
Norm of error  1.0372E-01 iterations 1
 -->Test for matrix  141
  Recip. condition number = 1.256452e-18
  0 KSP preconditioned resid norm 1.055488964519e+08 true resid
norm 4.711091037809e+03 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 2.998827511681e-04 true resid
norm 4.805214542776e-04 ||r(i)||/||b|| 1.019979130994e-07
Norm of error  4.8052E-04 iterations 1
 --> End of test, bye


mpiexec.hydra -n 1 ./ex52f -f0
./matrix_and_rhs_bin2/a_flow_check_1.bin -rhs
./matrix_and_rhs_bin2/b_flow_check_1.bin -loop_matrices flow_check
-loop_folder ./matrix_and_rhs_bin2 -matrix_index_start 140
-matrix_index_end 141  -pc_type lu -pc_factor_mat_solver_package
superlu_dist
 -->loac matrix a
 -->load rhs b
 size l,m,n,mm   9   9 9   9
Norm of error  3.6752E-08 iterations 1
 -->Test for matrix  140
Norm of error  1.6335E-01 iterations 1
 -->Test for matrix  141
Norm of error  3.4345E+01 iterations 1
 --> End of test, bye

Thanks,

Danyang

On 15-12-07 12:01 PM, Hong wrote:

Danyang:
Add 'call MatSetFromOptions(A,ierr)' to your code.
Attached below is ex52f.F modified from your ex52f.F to be
compatible with petsc-dev.

Hong

Hello Hong,


Re: [petsc-users] SuperLU convergence problem (More test)

2015-12-07 Thread Danyang Su

Hello Hong,

Thanks for the quick reply. The option "-mat_superlu_dist_fact 
SamePattern" works like a charm if I use it from the command line.


How can I make this option the default? I tried using 
PetscOptionsInsertString("-mat_superlu_dist_fact SamePattern",ierr) in 
my code, but this does not work.


Thanks,

Danyang

On 15-12-07 10:42 AM, Hong wrote:

Danyang :

Adding '-mat_superlu_dist_fact SamePattern' fixed the problem. Below 
is how I figured it out.


1. Reading ex52f.F, I see '-superlu_default' = 
'-pc_factor_mat_solver_package superlu_dist'; the latter enables 
runtime options for other packages. I use superlu_dist-4.2 and 
superlu-4.1 for the tests below.


2. Use the Matrix 168 to setup KSP solver and factorization, all 
packages, petsc, superlu_dist and mumps give same correct results:


./ex52f -f0 matrix_and_rhs_bin/a_flow_check_168.bin -rhs 
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check 
-loop_folder matrix_and_rhs_bin -pc_type lu 
-pc_factor_mat_solver_package petsc

 -->loac matrix a
 -->load rhs b
 size l,m,n,mm   9 9   9   9
Norm of error  7.7308E-11 iterations 1
 -->Test for matrix  168
..
 -->Test for matrix  172
Norm of error  3.8461E-11 iterations 1

./ex52f -f0 matrix_and_rhs_bin/a_flow_check_168.bin -rhs 
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check 
-loop_folder matrix_and_rhs_bin -pc_type lu 
-pc_factor_mat_solver_package superlu_dist

Norm of error  9.4073E-11 iterations 1
 -->Test for matrix  168
...
 -->Test for matrix  172
Norm of error  3.8187E-11 iterations 1

3. Use superlu, I get
./ex52f -f0 matrix_and_rhs_bin/a_flow_check_168.bin -rhs 
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check 
-loop_folder matrix_and_rhs_bin -pc_type lu 
-pc_factor_mat_solver_package superlu

Norm of error  1.0191E-06 iterations 1
 -->Test for matrix  168
...
 -->Test for matrix  172
Norm of error  9.7858E-07 iterations 1

Replacing default DiagPivotThresh: 1. to 0.0, I get same solutions as 
other packages:


./ex52f -f0 matrix_and_rhs_bin/a_flow_check_168.bin -rhs 
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check 
-loop_folder matrix_and_rhs_bin -pc_type lu 
-pc_factor_mat_solver_package superlu -mat_superlu_diagpivotthresh 0.0


Norm of error  8.3614E-11 iterations 1
 -->Test for matrix  168
...
 -->Test for matrix  172
Norm of error  3.7098E-11 iterations 1

4.
using '-mat_view ascii::ascii_info', I found that a_flow_check_1.bin 
and a_flow_check_168.bin seem have same structure:


 -->loac matrix a
Mat Object: 1 MPI processes
  type: seqaij
  rows=9, cols=9
  total: nonzeros=895600, allocated nonzeros=895600
  total number of mallocs used during MatSetValues calls =0
using I-node routines: found 45000 nodes, limit used is 5

5.
Using a_flow_check_1.bin, I am able to reproduce the error you 
reported: all packages give correct results except superlu_dist:
./ex52f -f0 matrix_and_rhs_bin/a_flow_check_1.bin -rhs 
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check 
-loop_folder matrix_and_rhs_bin -pc_type lu 
-pc_factor_mat_solver_package superlu_dist

Norm of error  2.5970E-12 iterations 1
 -->Test for matrix  168
Norm of error  1.3936E-01 iterations34
 -->Test for matrix  169

I guess the error might come from reuse of matrix factor. Replacing 
default

-mat_superlu_dist_fact  with
-mat_superlu_dist_fact SamePattern, I get

./ex52f -f0 matrix_and_rhs_bin/a_flow_check_1.bin -rhs 
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check 
-loop_folder matrix_and_rhs_bin -pc_type lu 
-pc_factor_mat_solver_package superlu_dist -mat_superlu_dist_fact 
SamePattern


Norm of error  2.5970E-12 iterations 1
 -->Test for matrix  168
Norm of error  9.4073E-11 iterations 1
 -->Test for matrix  169
Norm of error  6.4303E-11 iterations 1
 -->Test for matrix  170
Norm of error  7.4327E-11 iterations 1
 -->Test for matrix  171
Norm of error  5.4162E-11 iterations 1
 -->Test for matrix  172
Norm of error  3.4440E-11 iterations 1
 --> End of test, bye

Sherry may tell you why SamePattern_SameRowPerm causes the difference here.
Based on the above experiments, I would set the following as defaults:
'-mat_superlu_diagpivotthresh 0.0' in the petsc/superlu interface, and
'-mat_superlu_dist_fact SamePattern' in the petsc/superlu_dist interface.

Hong

Hi Hong,

I did more test today and finally found that the solution accuracy
depends on the initial (first) matrix quality. I modified the
ex52f.F to do the test. There are 6 matrices and right-hand-side
vectors. All these matrices and rhs are from my reactive transport
simulation. Results will be quite different depending on which one
you use to do factorization. Results will also be different if you
run with different options. My code is similar to the First or the
Second test below. When the matrix is 

Re: [petsc-users] SuperLU convergence problem (More test)

2015-12-07 Thread Danyang Su
Thanks. The inserted options work now. I hadn't put 
PetscOptionsInsertString in the right place before.


Danyang
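
For anyone hitting the same issue, a minimal sketch of a placement that
works (hypothetical names A, ksp, b, x; with petsc-3.6 the call takes only
the string, while newer releases add a leading PetscOptions argument such
as PETSC_NULL_OPTIONS):

      ! insert the option into the options database BEFORE the objects
      ! read their options
      call PetscOptionsInsertString('-mat_superlu_dist_fact SamePattern', ierr)
      CHKERRQ(ierr)

      call MatSetFromOptions(A, ierr)       ! matrix picks up runtime options
      CHKERRQ(ierr)
      call KSPSetFromOptions(ksp, ierr)     ! solver/preconditioner pick up options
      CHKERRQ(ierr)
      call KSPSolve(ksp, b, x, ierr)
      CHKERRQ(ierr)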

On 15-12-07 12:01 PM, Hong wrote:

Danyang:
Add 'call MatSetFromOptions(A,ierr)' to your code.
Attached below is ex52f.F modified from your ex52f.F to be compatible 
with petsc-dev.


Hong

Hello Hong,

Thanks for the quick reply and the option "-mat_superlu_dist_fact
SamePattern" works like a charm, if I use this option from the
command line.

How can I add this option as the default. I tried using
PetscOptionsInsertString("-mat_superlu_dist_fact
SamePattern",ierr) in my code but this does not work.

Thanks,

Danyang


On 15-12-07 10:42 AM, Hong wrote:

Danyang :

Adding '-mat_superlu_dist_fact SamePattern' fixed the problem.
Below is how I figured it out.

1. Reading ex52f.F, I see '-superlu_default' =
'-pc_factor_mat_solver_package superlu_dist', the later enables
runtime options for other packages. I use superlu_dist-4.2 and
superlu-4.1 for the tests below.

2. Use the Matrix 168 to setup KSP solver and factorization, all
packages, petsc, superlu_dist and mumps give same correct results:

./ex52f -f0 matrix_and_rhs_bin/a_flow_check_168.bin -rhs
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check
-loop_folder matrix_and_rhs_bin -pc_type lu
-pc_factor_mat_solver_package petsc
 -->loac matrix a
 -->load rhs b
 size l,m,n,mm   9   9   9 9
Norm of error  7.7308E-11 iterations 1
 -->Test for matrix  168
..
 -->Test for matrix  172
Norm of error  3.8461E-11 iterations 1

./ex52f -f0 matrix_and_rhs_bin/a_flow_check_168.bin -rhs
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check
-loop_folder matrix_and_rhs_bin -pc_type lu
-pc_factor_mat_solver_package superlu_dist
Norm of error  9.4073E-11 iterations 1
 -->Test for matrix  168
...
 -->Test for matrix  172
Norm of error  3.8187E-11 iterations 1

3. Use superlu, I get
./ex52f -f0 matrix_and_rhs_bin/a_flow_check_168.bin -rhs
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check
-loop_folder matrix_and_rhs_bin -pc_type lu
-pc_factor_mat_solver_package superlu
Norm of error  1.0191E-06 iterations 1
 -->Test for matrix  168
...
 -->Test for matrix  172
Norm of error  9.7858E-07 iterations 1

Replacing default DiagPivotThresh: 1. to 0.0, I get same
solutions as other packages:

./ex52f -f0 matrix_and_rhs_bin/a_flow_check_168.bin -rhs
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check
-loop_folder matrix_and_rhs_bin -pc_type lu
-pc_factor_mat_solver_package superlu
-mat_superlu_diagpivotthresh 0.0

Norm of error  8.3614E-11 iterations 1
 -->Test for matrix  168
...
 -->Test for matrix  172
Norm of error  3.7098E-11 iterations 1

4.
using '-mat_view ascii::ascii_info', I found that
a_flow_check_1.bin and a_flow_check_168.bin seem have same structure:

 -->loac matrix a
Mat Object: 1 MPI processes
  type: seqaij
  rows=9, cols=9
  total: nonzeros=895600, allocated nonzeros=895600
  total number of mallocs used during MatSetValues calls =0
using I-node routines: found 45000 nodes, limit used is 5

5.
Using a_flow_check_1.bin, I am able to reproduce the error you
reported: all packages give correct results except superlu_dist:
./ex52f -f0 matrix_and_rhs_bin/a_flow_check_1.bin -rhs
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check
-loop_folder matrix_and_rhs_bin -pc_type lu
-pc_factor_mat_solver_package superlu_dist
Norm of error  2.5970E-12 iterations 1
 -->Test for matrix  168
Norm of error  1.3936E-01 iterations34
 -->Test for matrix  169

I guess the error might come from reuse of matrix factor.
Replacing default
-mat_superlu_dist_fact  with
-mat_superlu_dist_fact SamePattern, I get

./ex52f -f0 matrix_and_rhs_bin/a_flow_check_1.bin -rhs
matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices flow_check
-loop_folder matrix_and_rhs_bin -pc_type lu
-pc_factor_mat_solver_package superlu_dist -mat_superlu_dist_fact
SamePattern

Norm of error  2.5970E-12 iterations 1
 -->Test for matrix  168
Norm of error  9.4073E-11 iterations 1
 -->Test for matrix  169
Norm of error  6.4303E-11 iterations 1
 -->Test for matrix  170
Norm of error  7.4327E-11 iterations 1
 -->Test for matrix  171
Norm of error  5.4162E-11 iterations 1
 -->Test for matrix  172
Norm of error  3.4440E-11 iterations 1
 --> End of test, bye

Sherry may tell you why SamePattern_SameRowPerm cause the
   

Re: [petsc-users] SuperLU convergence problem (More test)

2015-12-05 Thread Danyang Su

Hi Hong,

I did more tests today and finally found that the solution accuracy 
depends on the quality of the initial (first) matrix. I modified ex52f.F 
to do the test. There are 6 matrices and right-hand-side vectors; all of 
them come from my reactive transport simulation. The results are quite 
different depending on which matrix you use for the factorization, and 
they also differ if you run with different options. My code is similar to 
the first or the second test below. When the matrix stays well 
conditioned, it works fine. But even if the initial matrix is well 
conditioned, the run is likely to crash once the matrix becomes 
ill-conditioned. Since most of my cases are well conditioned, I didn't 
detect the problem before; this case is a special one.
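
A minimal sketch of the reuse pattern these tests exercise (placeholder
names A, b, x, ksp, index_start, index_end; the point is that one KSP, and
hence one factorization setup, is reused via KSPSetOperators for every
later matrix):

      PetscInt imat, index_start, index_end

      do imat = index_start, index_end
        ! ... load A and b for matrix 'imat' from the binary files ...
        call KSPSetOperators(ksp, A, A, ierr)   ! reuse the same KSP/PC
        CHKERRQ(ierr)
        call KSPSolve(ksp, b, x, ierr)
        CHKERRQ(ierr)
      end do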



How can I avoid this problem? Should I redo the factorization? Can PETSc 
automatically detect this problem, or is there an option available to do 
this?


All the data and test code (modified ex52f) can be found via the dropbox 
link below.

_
__https://www.dropbox.com/s/4al1a60creogd8m/petsc-superlu-test.tar.gz?dl=0_


Summary of my test is shown below.

First, use Matrix 1 to set up the KSP solver and factorization, then 
solve matrices 168 to 172


mpiexec.hydra -n 1 ./ex52f -f0 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin/a_flow_check_1.bin 
-rhs 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin/b_flow_check_1.bin 
-loop_matrices flow_check -loop_folder 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin -pc_type lu 
-pc_factor_mat_solver_package superlu_dist


Norm of error  3.8815E-11 iterations 1
 -->Test for matrix  168
Norm of error  4.2307E-01 iterations32
 -->Test for matrix  169
Norm of error  3.0528E-01 iterations32
 -->Test for matrix  170
Norm of error  3.1177E-01 iterations32
 -->Test for matrix  171
Norm of error  3.2793E-01 iterations32
 -->Test for matrix  172
Norm of error  3.1251E-01 iterations31

Second, use Matrix 1 to set up the KSP solver and factorization through 
the SuperLU-related code path (-superlu_default). I thought this would 
generate the same results as the first test, but it does not.


mpiexec.hydra -n 1 ./ex52f -f0 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin/a_flow_check_1.bin 
-rhs 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin/b_flow_check_1.bin 
-loop_matrices flow_check -loop_folder 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin -superlu_default


Norm of error  2.2632E-12 iterations 1
 -->Test for matrix  168
Norm of error  1.0817E+04 iterations 1
 -->Test for matrix  169
Norm of error  1.0786E+04 iterations 1
 -->Test for matrix  170
Norm of error  1.0792E+04 iterations 1
 -->Test for matrix  171
Norm of error  1.0792E+04 iterations 1
 -->Test for matrix  172
Norm of error  1.0792E+04 iterations 1


Third, use Matrix 168 to set up the KSP solver and factorization, then 
solve matrices 168 to 172


mpiexec.hydra -n 1 ./ex52f -f0 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin/a_flow_check_168.bin -rhs 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices 
flow_check -loop_folder 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin -pc_type lu 
-pc_factor_mat_solver_package superlu_dist


Norm of error  9.5528E-10 iterations 1
 -->Test for matrix  168
Norm of error  9.4945E-10 iterations 1
 -->Test for matrix  169
Norm of error  6.4279E-10 iterations 1
 -->Test for matrix  170
Norm of error  7.4633E-10 iterations 1
 -->Test for matrix  171
Norm of error  7.4863E-10 iterations 1
 -->Test for matrix  172
Norm of error  8.9701E-10 iterations 1

Fourth, use Matrix 168 to set up the KSP solver and factorization through 
the SuperLU-related code path (-superlu_default). I thought this would 
generate the same results as the third test, but it does not.


mpiexec.hydra -n 1 ./ex52f -f0 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin/a_flow_check_168.bin -rhs 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin/b_flow_check_168.bin -loop_matrices 
flow_check -loop_folder 
/home/dsu/work/petsc-superlu-test/matrix_and_rhs_bin -superlu_default


Norm of error  3.7017E-11 iterations 1
 -->Test for matrix  168
Norm of error  3.6420E-11 iterations 1
 -->Test for matrix  169
Norm of error  3.7184E-11 iterations 1
 -->Test for matrix  170
Norm of error  3.6847E-11 iterations 1
 -->Test for matrix  171
Norm of error  3.7883E-11 iterations 1
 -->Test for matrix  172
Norm of error  3.8805E-11 iterations 1

Thanks very much,

Danyang

On 15-12-03 01:59 PM, Hong wrote:

Danyang :
Further testing a_flow_check_168.bin,
./ex10 -f0 
/Users/Hong/Downloads/matrix_and_rhs_bin/a_flow_check_168.bin -rhs 
/Users/Hong/Downloads/matrix_and_rhs_bin/x_flow_check_168.bin -pc_type 
lu -pc_factor_mat_solver_package superlu 

Re: [petsc-users] SuperLU convergence problem

2015-12-03 Thread Danyang Su

Hi Hong,

I just checked these matrices and rhs using ex10, and they all work fine. 
I found that something is wrong in my code when using a direct solver: 
the second parameter mat in PCFactorGetMatrix(PC pc,Mat *mat) is not 
initialized in my code for SUPERLU or MUMPS.


I will fix this bug, rerun the tests and get back to you later.

Thanks very much,

Danyang

On 15-12-03 01:59 PM, Hong wrote:

Danyang :
Further testing a_flow_check_168.bin,
./ex10 -f0 
/Users/Hong/Downloads/matrix_and_rhs_bin/a_flow_check_168.bin -rhs 
/Users/Hong/Downloads/matrix_and_rhs_bin/x_flow_check_168.bin -pc_type 
lu -pc_factor_mat_solver_package superlu -ksp_monitor_true_residual 
-mat_superlu_conditionnumber

  Recip. condition number = 1.610480e-12
  0 KSP preconditioned resid norm 6.873340313547e+09 true resid norm 
7.295020990196e+03 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 2.051833296449e-02 true resid norm 
2.976859070118e-02 ||r(i)||/||b|| 4.080672384793e-06

Number of iterations =   1
Residual norm 0.0297686

condition number of this matrix = 1/1.610480e-12 = 1.e+12,
i.e., this matrix is ill-conditioned.

Hong


Hi Hong,

The binary format of matrix, rhs and solution can be downloaded
via the link below.

https://www.dropbox.com/s/cl3gfi0s0kjlktf/matrix_and_rhs_bin.tar.gz?dl=0

Thanks,

Danyang


On 15-12-03 10:50 AM, Hong wrote:

Danyang:



To my surprise, the solutions from SuperLU at timestep 29 seem
incorrect for the first 4 Newton iterations, but the solutions
from the iterative solver and MUMPS are correct.

Please find all the matrices, rhs and solutions at timestep
29 via the link below. The data is a bit large, so I am sharing
it through Dropbox. A piece of matlab code to read these data
and then compute the norm has also been attached.
_https://www.dropbox.com/s/rr8ueysgflmxs7h/results-check.tar.gz?dl=0_


Can you send us matrix in petsc binary format?

e.g., call MatView(M, PETSC_VIEWER_BINARY_(PETSC_COMM_WORLD))
or '-ksp_view_mat binary'

Hong



Below is a summary of the norm from the three solvers at
timestep 29, newton iteration 1 to 5.

Timestep 29
Norm of residual seq 1.661321e-09, superlu 1.657103e+04,
mumps 3.731225e-11
Norm of residual seq 1.753079e-09, superlu 6.675467e+02,
mumps 1.509919e-13
Norm of residual seq 4.914971e-10, superlu 1.236362e-01,
mumps 2.139303e-17
Norm of residual seq 3.532769e-10, superlu 1.304670e-04,
mumps 5.387000e-20
Norm of residual seq 3.885629e-10, superlu 2.754876e-07,
mumps 4.108675e-21

Would anybody please check if SuperLU can solve these
matrices? Another possibility is that something is wrong in
my own code. But so far, I cannot find any problem in my code
since the same code works fine if I use the iterative solver or
the direct solver MUMPS. For the other cases I have tested, all
these solvers work fine.

Please let me know if I did not write down the problem clearly.

Thanks,

Danyang











Re: [petsc-users] SuperLU convergence problem

2015-12-03 Thread Danyang Su

Hi Hong,

The binary format of matrix, rhs and solution can be downloaded via the 
link below.


https://www.dropbox.com/s/cl3gfi0s0kjlktf/matrix_and_rhs_bin.tar.gz?dl=0

Thanks,

Danyang

On 15-12-03 10:50 AM, Hong wrote:

Danyang:



To my surprising, solutions from SuperLU at timestep 29 seems not
correct for the first 4 Newton iterations, but the solutions from
iteration solver and MUMPS are correct.

Please find all the matrices, rhs and solutions at timestep 29 via
the link below. The data is a bit large so that I just share it
through Dropbox. A piece of matlab code to read these data and
then computer the norm has also been attached.
_https://www.dropbox.com/s/rr8ueysgflmxs7h/results-check.tar.gz?dl=0_


Can you send us matrix in petsc binary format?

e.g., call MatView(M, PETSC_VIEWER_BINARY_(PETSC_COMM_WORLD))
or '-ksp_view_mat binary'

Hong



Below is a summary of the norm from the three solvers at timestep
29, newton iteration 1 to 5.

Timestep 29
Norm of residual seq 1.661321e-09, superlu 1.657103e+04, mumps
3.731225e-11
Norm of residual seq 1.753079e-09, superlu 6.675467e+02, mumps
1.509919e-13
Norm of residual seq 4.914971e-10, superlu 1.236362e-01, mumps
2.139303e-17
Norm of residual seq 3.532769e-10, superlu 1.304670e-04, mumps
5.387000e-20
Norm of residual seq 3.885629e-10, superlu 2.754876e-07, mumps
4.108675e-21

Would anybody please check if SuperLU can solve these matrices?
Another possibility is that something is wrong in my own code. But
so far, I cannot find any problem in my code since the same code
works fine if I using iterative solver or direct solver MUMPS. But
for other cases I have tested,  all these solvers work fine.

Please let me know if I did not write down the problem clearly.

Thanks,

Danyang








Re: [petsc-users] Error reported by MUMPS in numerical factorization phase

2015-12-02 Thread Danyang Su

Hi Hong,

It's not easy to run in debugging mode, as the cluster does not have PETSc 
installed in debug mode. Restarting the case from the crashing time does 
not reproduce the problem, so if I want to catch this error I need to 
start the simulation from the beginning, which takes hours on the cluster.


Do you mean I need to redo symbolic factorization? For now, I only do 
factorization once at the first timestep and then reuse it. Some of the 
code is shown below.


if (timestep == 1) then
  call PCFactorSetMatSolverPackage(pc_flow,MATSOLVERMUMPS,ierr)
  CHKERRQ(ierr)

  call PCFactorSetUpMatSolverPackage(pc_flow,ierr)
  CHKERRQ(ierr)

  call PCFactorGetMatrix(pc_flow,a_flow_j,ierr)
  CHKERRQ(ierr)
end if

call KSPSolve(ksp_flow,b_flow,x_flow,ierr)
CHKERRQ(ierr)

Thanks,

Danyang

On 15-12-02 08:39 AM, Hong wrote:

Danyang :

My code fails due to the error in external library. It works fine
for the previous 2000+ timesteps but then crashes.

[4]PETSC ERROR: Error in external library
[4]PETSC ERROR: Error reported by MUMPS in numerical factorization
phase: INFO(1)=-1, INFO(2)=0

This simply says an error occurred in proc[0] during numerical
factorization, which usually means either a zero pivot was encountered
or it ran out of memory. Since it happens at a later timestep, where I
guess you reuse the matrix factor, a zero pivot might be the problem.
Is it possible to run it in debugging mode? In this way, MUMPS would
dump out more information.



Then I tried the same simulation on another machine using the same
number of processors, and it does not fail.

Does this machine  have larger memory?

Hong




Re: [petsc-users] Error reported by MUMPS in numerical factorization phase

2015-12-02 Thread Danyang Su

Hi Hong,

Thanks. I can test it, but it may take some time to install petsc-dev
on the cluster. I will try more cases to see if I can reproduce this
error on my local machine, which is much more convenient for testing in
debug mode. So far, the error does not occur on my local machine using
the same code, the same petsc-3.6.2 version, the same case and the same
number of processors. The system and PETSc configuration are different.


Regards,

Danyang

On 15-12-02 10:26 AM, Hong wrote:

Danyang:
It is likely a zero pivot. I'm adding a feature to petsc. When matrix
factorization fails, computation continues with the error information
stored in

ksp->reason=DIVERGED_PCSETUP_FAILED.
For your timestepping code, you may be able to automatically reduce the
timestep and continue your simulation.


Do you want to test it? If so, you need to install petsc-dev with my
branch hzhang/matpackage-erroriffpe on your cluster. We may merge this
branch to petsc-master soon.
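
With such a feature, the timestepping loop could detect the failure and
react; a rough Fortran sketch (using the ksp_flow/b_flow/x_flow objects
from the code quoted below and a hypothetical timestep variable delt;
the exact reason name follows Hong's description and may differ between
PETSc versions):

   KSPConvergedReason reason

   call KSPSolve(ksp_flow,b_flow,x_flow,ierr)
   CHKERRQ(ierr)
   call KSPGetConvergedReason(ksp_flow,reason,ierr)
   CHKERRQ(ierr)
   if (reason == KSP_DIVERGED_PCSETUP_FAILED) then
     ! factorization failed (e.g., zero pivot): cut the timestep and retry
     delt = 0.5d0*delt
   end if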



It's not easy to run in debugging mode as the cluster does not
have petsc installed in debug mode. Restarting the case from the
crashing time does not reproduce the problem, so if I want to catch
this error I need to start the simulation from the beginning, which
takes hours on the cluster.


This is why we are adding this new feature.


Do you mean I need to redo the symbolic factorization? For now, I only
set up the factorization once at the first timestep and then reuse it.
Some of the code is shown below.

if (timestep == 1) then
  call PCFactorSetMatSolverPackage(pc_flow,MATSOLVERMUMPS,ierr)
  CHKERRQ(ierr)

  call PCFactorSetUpMatSolverPackage(pc_flow,ierr)
  CHKERRQ(ierr)

  call PCFactorGetMatrix(pc_flow,a_flow_j,ierr)
  CHKERRQ(ierr)
end if

call KSPSolve(ksp_flow,b_flow,x_flow,ierr)
CHKERRQ(ierr)


I do not think you need to change this part of the code.
Does your code check convergence at each time step?

Hong



On 15-12-02 08:39 AM, Hong wrote:

Danyang :

My code fails due to an error in an external library. It works
fine for the first 2000+ timesteps but then crashes.

[4]PETSC ERROR: Error in external library
[4]PETSC ERROR: Error reported by MUMPS in numerical
factorization phase: INFO(1)=-1, INFO(2)=0

This simply says an error occurred in proc[0] during numerical
factorization, which usually means either a zero pivot was encountered
or it ran out of memory. Since it happens at a later timestep, where I
guess you reuse the matrix factor, a zero pivot might be the problem.
Is it possible to run it in debugging mode? In this way, MUMPS would
dump out more information.


Then I tried the same simulation on another machine using the
same number of processors, and it does not fail.

Does this machine  have larger memory?

Hong







[petsc-users] Error reported by MUMPS in numerical factorization phase

2015-12-01 Thread Danyang Su

Hi All,

My code fails due to an error in an external library. It works fine
for the first 2000+ timesteps but then crashes.


[4]PETSC ERROR: Error in external library
[4]PETSC ERROR: Error reported by MUMPS in numerical factorization 
phase: INFO(1)=-1, INFO(2)=0


The full error message is attached.

Then I tried the same simulation on another machine using the same
number of processors, and it does not fail.


Is there anything wrong with the configuration, or could something be
wrong in my code?


Thanks,

Danyang
[4]PETSC ERROR: - Error Message 
--
[4]PETSC ERROR: Error in external library
[4]PETSC ERROR: Error reported by MUMPS in numerical factorization phase: 
INFO(1)=-1, INFO(2)=0

[4]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[4]PETSC ERROR: Petsc Release Version 3.6.2, Oct, 02, 2015 
[4]PETSC ERROR: ../min3p_thcm on a arch-linux2-c-debug named pod26b15 by 
danyangs Tue Dec  1 11:29:00 2015
[4]PETSC ERROR: Configure options 
--prefix=/global/software/lib64/intel/petsc-3.6.2 --with-64-bit-pointers=0 
--with-pthread=0 --with-pthreadclasses=0 
--with-cc=/global/software/openmpi-1.6.5/intel/bin/mpicc --CFLAGS="-O3 
-axSSE4.2,SSE4.1 -xSSSE3  -I/global/software/intel/composerxe/mkl/include 
-I/global/software/lib64/intel/ncsa-tools/hdf5-1.8.15p1/include 
-I/global/software/lib64/intel/petsc-3.6.2/include" 
--with-cxx=/global/software/openmpi-1.6.5/intel/bin/mpicxx --CXXFLAGS="-O3 
-axSSE4.2,SSE4.1 -xSSSE3  -I/global/software/intel/composerxe/mkl/include 
-I/global/software/lib64/intel/ncsa-tools/hdf5-1.8.15p1/include 
-I/global/software/lib64/intel/petsc-3.6.2/include" 
--with-fc=/global/software/openmpi-1.6.5/intel/bin/mpif90 --FFLAGS="-O3 
-axSSE4.2,SSE4.1 -xSSSE3  -I/global/software/intel/composerxe/mkl/include 
-I/global/software/lib64/intel/ncsa-tools/hdf5-1.8.15p1/include 
-I/global/software/lib64/intel/petsc-3.6.2/include" --with-cxx-dialect=C++11 
--with-single-library=1 --with-shared-libraries 
--with-shared-ld=/global/software/openmpi-1.6.5/intel-2011/bin/mpicc 
--sharedLibraryFlags="-fpic -fPIC " --with-large-file-io=1 --with-mpi=1 
--with-mpi-shared=1 
--with-mpirun=/global/software/openmpi-1.6.5/intel/bin/mpiexec 
--with-mpi-compilers=1 --with-x=yes 
--with-blas-lapack-dir=/global/software/intel/composerxe/mkl/lib/intel64 
--with-ptscotch=0 --with-x=1 --with-hdf5=1 
--with-hdf5-dir=/global/software/lib64/intel/ncsa-tools/hdf5-1.8.15p1 
--with-netcdf=0 --with-fftw=1 
--with-fftw-dir=/global/software/lib64/intel/fftw-3.3.3 --download-blacs=yes 
--download-scalapack=yes --download-superlu_dist=yes --download-mumps=yes 
--download-metis=yes --download-parmetis=yes --download-spooles=yes 
--download-cproto=yes --download-suitesparse=yes --download-hypre=yes 
--download-amd=yes --download-adifor=yes --download-euclid=yes 
--download-spai=yes --download-sprng=yes --download-ml=yes --download-boost=yes 
--download-triangle=yes --download-generator=yes --with-boost=1 
--with-petsc4py=0 --with-numpy=1 exit 0
[4]PETSC ERROR: #1 MatFactorNumeric_MUMPS() line 1172 in 
/tmp/petsc-3.6.2/src/mat/impls/aij/mpi/mumps/mumps.c
[5]PETSC ERROR: - Error Message 
--
[5]PETSC ERROR: Error in external library
[5]PETSC ERROR: Error reported by MUMPS in numerical factorization phase: 
INFO(1)=-1, INFO(2)=0

[5]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[5]PETSC ERROR: Petsc Release Version 3.6.2, Oct, 02, 2015 
[5]PETSC ERROR: ../min3p_thcm on a arch-linux2-c-debug named pod26b15 by 
danyangs Tue Dec  1 11:29:00 2015
[5]PETSC ERROR: Configure options 
--prefix=/global/software/lib64/intel/petsc-3.6.2 --with-64-bit-pointers=0 
--with-pthread=0 --with-pthreadclasses=0 
--with-cc=/global/software/openmpi-1.6.5/intel/bin/mpicc --CFLAGS="-O3 
-axSSE4.2,SSE4.1 -xSSSE3  -I/global/software/intel/composerxe/mkl/include 
-I/global/software/lib64/intel/ncsa-tools/hdf5-1.8.15p1/include 
-I/global/software/lib64/intel/petsc-3.6.2/include" 
--with-cxx=/global/software/openmpi-1.6.5/intel/bin/mpicxx --CXXFLAGS="-O3 
-axSSE4.2,SSE4.1 -xSSSE3  -I/global/software/intel/composerxe/mkl/include 
-I/global/software/lib64/intel/ncsa-tools/hdf5-1.8.15p1/include 
-I/global/software/lib64/intel/petsc-3.6.2/include" 
--with-fc=/global/software/openmpi-1.6.5/intel/bin/mpif90 --FFLAGS="-O3 
-axSSE4.2,SSE4.1 -xSSSE3  -I/global/software/intel/composerxe/mkl/include 
-I/global/software/lib64/intel/ncsa-tools/hdf5-1.8.15p1/include 
-I/global/software/lib64/intel/petsc-3.6.2/include" --with-cxx-dialect=C++11 
--with-single-library=1 --with-shared-libraries 
--with-shared-ld=/global/software/openmpi-1.6.5/intel-2011/bin/mpicc 
--sharedLibraryFlags="-fpic -fPIC " --with-large-file-io=1 --with-mpi=1 
--with-mpi-shared=1 

Re: [petsc-users] Error after updating to 3.6.0: finclude/petscsys.h: No such file or directory

2015-06-14 Thread Danyang Su

Sorry, I forgot about this change.

 * Fortran include files are now in include/petsc/finclude instead of
   include/finclude. Thus replace uses of #include "finclude/xxx.h"
   with #include "petsc/finclude/xxx.h". Reason for change: to
   namespace the finclude directory with PETSc for --prefix installs of
   PETSc and for packaging systems
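
In a Fortran source file, the change looks like this (a minimal example;
petscsys.h stands for whichever PETSc header the file includes):

! PETSc 3.5.x and earlier:
!#include "finclude/petscsys.h"
! PETSc 3.6.0 and later:
#include "petsc/finclude/petscsys.h"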



On 15-06-14 09:15 PM, Danyang Su wrote:

Hi PETSc User,

I get a problem compiling my code after updating PETSc to 3.6.0.

The code works fine using PETSc 3.5.3 and PETSc-dev.

I have modified the include lines in the makefile from

#PETSc variables for V3.5.3 and previous version
#include ${PETSC_DIR}/conf/variables
#include ${PETSC_DIR}/conf/rules

to

#PETSc variables for development version, version V3.6.0 and later
include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules

but I got the error

fatal error: finclude/petscsys.h: No such file or directory
 #include finclude/petscsys.h
 ^
compilation terminated.

The configure log is attached.

Thanks and regards,

Danyang




Re: [petsc-users] Is matrix analysis available in PETSc or external package?

2015-05-12 Thread Danyang Su

On 15-05-11 07:19 PM, Hong wrote:

Danyang:


I recently have some time-dependent cases that have difficulty
converging. They need a lot of linear iterations during a specific
time period, e.g., more than 100 linear iterations for every Newton
iteration. In the PETSc parallel version, this number is doubled or
even more. Our cases usually need fewer than 10 linear iterations
per Newton iteration. This is NOT caused by the PETSc-related
routines, as the original sequential solver (ILU) also has the same
problem.

The results seem reasonable, but the large number of iterations
leads me to suspect that something is not well implemented in a
particular module. This module was developed a long time ago by
someone else, and it is impractical for me to check every line
(thousands of lines) and do a theoretical analysis. Does PETSc or
an external package provide matrix analysis tools to help locate
the 'suspect' entries?

We need to know what solvers are being used. My guess is the default
gmres/bjacobi/ilu(0). Please run your code with the option '-ts_view' or
'-snes_view' to find out.

Hi Hong,

Sorry for the late reply. I guess there is nothing wrong in the solver
or data structure, but maybe there are some large entries in the linear
system that make the problem hard to solve. I will export the matrix to
check these values.


Thanks,

Danyang


Hong





Re: [petsc-users] Is matrix analysis available in PETSc or external package?

2015-05-12 Thread Danyang Su

On 15-05-12 11:13 AM, Barry Smith wrote:

On May 11, 2015, at 7:10 PM, Danyang Su danyang...@gmail.com wrote:

Hi All,

I recently have some time-dependent cases that have difficulty converging. They
need a lot of linear iterations during a specific time period, e.g., more than
100 linear iterations for every Newton iteration. In the PETSc parallel version,
this number is doubled or even more. Our cases usually need fewer than 10 linear
iterations per Newton iteration. This is NOT caused by the PETSc-related
routines, as the original sequential solver (ILU) also has the same problem.

The results seem reasonable, but the large number of iterations leads me to
suspect that something is not well implemented in a particular module. This
module was developed a long time ago by someone else, and it is impractical for
me to check every line (thousands of lines) and do a theoretical analysis. Does
PETSc or an external package provide matrix analysis tools to help locate the
'suspect' entries?

What do you mean by 'matrix analysis'?   You can run with -ksp_view_mat
binary to have all the matrices saved in a binary file called binaryoutput,
which you can read into MATLAB (with
$PETSC_DIR/shared/petsc/matlab/PetscBinaryRead.m) or python (with
bin/PetscBinaryIO.py). You can then look at the matrices, determine the largest
entry, or whatever you wish.

But it is also very possible that, due to changes in the model you are
evolving in time, the linear system simply becomes more ill-conditioned during
part of the simulation. This is pretty common and need not indicate that there
is anything wrong with the model or the code.

   Barry
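
As one alternative to exporting the matrices, the largest entries can
also be located from the running Fortran code itself; a rough sketch,
assuming A is the assembled matrix and b is any vector with the same row
layout (e.g. the rhs), and that MatGetRowMaxAbs accepts
PETSC_NULL_INTEGER for the unused index argument:

   Vec       rowmax
   PetscInt  p
   PetscReal val

   call VecDuplicate(b,rowmax,ierr)
   CHKERRQ(ierr)
   ! rowmax(i) = largest absolute value in row i of A
   call MatGetRowMaxAbs(A,rowmax,PETSC_NULL_INTEGER,ierr)
   CHKERRQ(ierr)
   ! maximum of those row maxima and the (row) index where it occurs
   call VecMax(rowmax,p,val,ierr)
   CHKERRQ(ierr)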

Hi Barry,

Here 'matrix analysis' means finding extremely large entries in the matrix
that make the linear solver difficult to converge. Since I am not sure
whether anything is wrong with the code, I just want to check the location
of these entries and how they are calculated. This is a complex reactive
transport problem, so it is possible that nothing is wrong but the system
is more ill-conditioned at a specific simulation time.


I also have the matrix output in Matrix Market format that I can use for
analysis.


Thanks,

Danyang



Thanks and regards,

Danyang




[petsc-users] Is matrix analysis available in PETSc or external package?

2015-05-11 Thread Danyang Su

Hi All,

I recently have some time-dependent cases that have difficulty
converging. They need a lot of linear iterations during a specific time
period, e.g., more than 100 linear iterations for every Newton
iteration. In the PETSc parallel version, this number is doubled or even
more. Our cases usually need fewer than 10 linear iterations per Newton
iteration. This is NOT caused by the PETSc-related routines, as the
original sequential solver (ILU) also has the same problem.


The results seem reasonable, but the large number of iterations leads
me to suspect that something is not well implemented in a particular
module. This module was developed a long time ago by someone else, and
it is impractical for me to check every line (thousands of lines) and do
a theoretical analysis. Does PETSc or an external package provide matrix
analysis tools to help locate the 'suspect' entries?


Thanks and regards,

Danyang

