Re: [petsc-users] Could not execute "['git', 'rev-parse', '--git-dir']"

2018-03-07 Thread Kong, Fande
[kongf@falcon1 git.hypre]$ git rev-parse  --git-dir
.git
[kongf@falcon1 git.hypre]$ echo $?
0
[kongf@falcon1 git.hypre]$
[kongf@falcon1 git.hypre]$ git fsck
Checking object directories: 100% (256/256), done.
Checking objects: 100% (22710/22710), done.
[kongf@falcon1 git.hypre]$



But the same error still persists!


Fande,

On Wed, Mar 7, 2018 at 12:33 PM, Satish Balay  wrote:

> 
> balay@asterix /home/balay
> $ git rev-parse --git-dir
> fatal: Not a git repository (or any parent up to mount point /)
> Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
> balay@asterix /home/balay
> $ echo $?
> 128
> balay@asterix /home/balay
> $ cd petsc
> balay@asterix /home/balay/petsc (next *=)
> $ git rev-parse --git-dir
> .git
> balay@asterix /home/balay/petsc (next *=)
> $ echo $?
> 0
> balay@asterix /home/balay/petsc (next *=)
> $
> <
>
> So - for some reason, git gives an error in /home/kongf/workhome/projects/
> petsc-3.7.7/arch-linux2-c-opt-64bit/externalpackages/git.hypre
>
> You can try:
>
> cd /home/kongf/workhome/projects/petsc-3.7.7/arch-linux2-c-opt-
> 64bit/externalpackages/git.hypre
> git rev-parse --git-dir
> echo $?
> git fsck
>
> If this works - you can rerun configure and see if the error persists
>
> Satish
>
>
> On Wed, 7 Mar 2018, Kong, Fande wrote:
>
> > Hi PETSc team:
> >
> > What is the possible reason for this?
> >
> > The log file is attached.
> >
> >
> > Fande,
> >
>
>


Re: [petsc-users] How to change optimization flags?

2018-03-07 Thread Satish Balay
COPTFLAGS is meant for optimization flags, but CFLAGS should also work.
However, since each of them has its own default - you would use the
correct one to override that default.
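
For example, something along these lines should work (a sketch only - the
flag values here are illustrative, and you should check configure.log to
confirm they were accepted):

./configure --with-debugging=0 \
  --COPTFLAGS='-O3 -march=native' \
  --CXXOPTFLAGS='-O3 -march=native' \
  --FOPTFLAGS='-O3 -march=native' \
  [other options]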

If the option you specify is not picked up - most likely petsc
configure attempted to use it - and then rejected it.

You'll have to send us configure.log to figure out why they were rejected.

Satish

On Wed, 7 Mar 2018, Lucas Clemente Vella wrote:

> Despite the documentation and the output from "./configure --help", PETSc
> completely disregards all of the below:
> 
> --CFLAGS=
> --CXXFLAGS=
> --FFLAGS=
> --COPTFLAGS=
> --FOPTFLAGS=
> --CXXOPTFLAGS=
> 
> And also the customary environment variables CFLAGS and CXXFLAGS.
> 
> How can I set special optimization flags, like enabling of SIMD
> instructions?
> 
> 



Re: [petsc-users] Could not execute "['git', 'rev-parse', '--git-dir']"

2018-03-07 Thread Satish Balay

balay@asterix /home/balay
$ git rev-parse --git-dir
fatal: Not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
balay@asterix /home/balay
$ echo $?
128
balay@asterix /home/balay
$ cd petsc
balay@asterix /home/balay/petsc (next *=)
$ git rev-parse --git-dir
.git
balay@asterix /home/balay/petsc (next *=)
$ echo $?
0
balay@asterix /home/balay/petsc (next *=)
$ 
<

So - for some reason, git gives an error in 
/home/kongf/workhome/projects/petsc-3.7.7/arch-linux2-c-opt-64bit/externalpackages/git.hypre

You can try:

cd 
/home/kongf/workhome/projects/petsc-3.7.7/arch-linux2-c-opt-64bit/externalpackages/git.hypre
git rev-parse --git-dir
echo $?
git fsck

If this works - you can rerun configure and see if the error persists

Satish


On Wed, 7 Mar 2018, Kong, Fande wrote:

> Hi PETSc team:
> 
> What is the possible reason for this?
> 
> The log file is attached.
> 
> 
> Fande,
> 



[petsc-users] No VecGetLocalSubVector?

2018-03-07 Thread Garth Wells
Is there a reason why there is no 'VecGetLocalSubVector' to match
MatGetLocalSubMatrix?

I've used VecGetSubVector, but can't use local indices with it, e.g.
can't use VecSetValuesLocal.

Garth

Re: [petsc-users] [petsc-maint] how to check if cell is local owned in DMPlex

2018-03-07 Thread Danyang Su

Hi All,

Thanks again for all your help during my code development; it turns 
out that DMPlex for unstructured grids works pretty well. I have another 
question.


 Is there any alternative method to get the number of leaves without using 
PetscSFGetGraph?


Based on my test, this function works fine with the current PETSc-dev 
version, but I cannot get the Fortran code to compile correctly with other 
versions, as mentioned in the previous emails. I ask because some of the 
clusters we use do not have the PETSc-dev version, and it takes time to get 
staff to install another version.


Thanks,

Danyang

On 18-03-05 11:50 AM, Smith, Barry F. wrote:

MatSolverPackage

became MatSolverType
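
In the Fortran code this is presumably just a one-line change in the
declaration (a sketch; the variable name is taken from the message below):

   ! with PETSc <= 3.8:   MatSolverPackage :: solver_pkg_flow
   ! with petsc-dev:
   MatSolverType :: solver_pkg_flow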




On Mar 5, 2018, at 1:35 PM, Danyang Su  wrote:

Hi Barry and Matt,

The compile problem is probably caused by the PETSc version installed on my 
computer. When I updated to the PETSc-dev version, the ex1f example worked fine. 
However, I cannot compile this example under PETSc-3.8.3.

After updating to the PETSc-dev version, I encounter another compile error in 
my code:

 MatSolverPackage :: solver_pkg_flow
 1
Error: Unclassifiable statement at (1)

Including petscmat.h or petscpc.h does not help to solve this problem. I can 
rewrite this part to get rid of it, but I would rather keep it if there is an 
alternative way to go. Which header file should I include in order to use 
MatSolverPackage?

Thanks,

Danyang


On 18-03-04 11:15 AM, Smith, Barry F. wrote:

   See src/vec/is/sf/examples/tutorials/ex1f.F90 in the master branch of the 
PETSc git repository

   BTW:

git grep -i petscsfgetgraph

will show every use of the function in the source code. Very useful tool

Barry



On Mar 4, 2018, at 1:05 PM, Danyang Su  wrote:



On 18-03-04 08:08 AM, Matthew Knepley wrote:

On Fri, Mar 2, 2018 at 3:22 PM, Danyang Su  wrote:
Hi Matt,
I use the latest Fortran style in PETSc 3.8. Enclosed are the PETSc 
configuration, the compile log, and the function that causes the compile error. 
The error happens after I include petscsf.h in the following section. 
I didn't find petscsf.h in the petsc/finclude/ folder, so I used the header file 
in the 'include' folder, and that seems not to be allowed.

I apologize for taking so long. The PetscSF definitions are in

#include <petsc/finclude/petscis.h>

Hi Matt,

After including
#include <petsc/finclude/petscis.h>
   use petscis

I still get an error saying undefined reference to `petscsfgetgraph_'

Did I miss any other header file?

Thanks,

Danyang

You are correct that they should be moved out.

   Thanks,

  Matt

#ifdef PETSC_V3_8_X

#include 
#include 
#include 
#include 
   use petscsys
   use petscdmplex
   use petscsf

#endif

Thanks,

Danyang


On 18-03-02 12:08 PM, Matthew Knepley wrote:

On Fri, Mar 2, 2018 at 3:00 PM, Danyang Su  wrote:
On 18-03-02 10:58 AM, Matthew Knepley wrote:

On Fri, Mar 2, 2018 at 1:41 PM, Danyang Su  wrote:

On 18-02-19 03:30 PM, Matthew Knepley wrote:

On Mon, Feb 19, 2018 at 3:11 PM, Danyang Su  wrote:
Hi Matt,

Would you please let me know how to check if a cell is locally owned? When overlap 
is 0 in DMPlexDistribute, all the cells are locally owned. How about overlap > 0? 
It seems impossible to check by node, because a cell can be locally owned even 
if none of the nodes in the cell is locally owned.

If a cell is in the PetscSF, then it is not locally owned. The local nodes in 
the SF are sorted, so I use
PetscFindInt 
(http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html).

Hi Matt,

Would you please tell me a little more about how to mark the ghost cells when 
overlap > 0? What do you mean by a cell being in the PetscSF? I used PetscSFView to 
export the graph (original mesh file pile.vtk) and it exports all the cells, 
including the ghost cells (PETScSFView.txt).

Yes, I will send you some sample code when I get time. The first problem is 
that you are looking at a different PetscSF. This looks like the
one returned by DMPlexDistribute(). This is mapping the serial mesh to the 
parallel mesh. You want

   
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMGetPointSF.html

Then you can look at

   
https://bitbucket.org/petsc/petsc/src/1788fc36644e622df8cb1a0de85676ccc5af0239/src/dm/impls/plex/plexsubmesh.c?at=master=file-view-default#plexsubmesh.c-683

I get the pointSF, get out the list of leaves, and find points in it using 
PetscFindInt()
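
In C, that check looks roughly like the sketch below (names and error handling
are illustrative, not PETSc-provided; it assumes the point SF stores an explicit,
sorted leaf list, which is the usual case for a distributed Plex):

  #include <petscdmplex.h>
  #include <petscsf.h>

  /* A point (cell) that appears among the leaves of the point SF is a ghost,
     i.e. not locally owned. */
  PetscErrorCode CellIsLocallyOwned(DM dm, PetscInt cell, PetscBool *owned)
  {
    PetscSF         sf;
    PetscInt        nroots, nleaves, loc;
    const PetscInt *ilocal;
    PetscErrorCode  ierr;

    ierr = DMGetPointSF(dm, &sf);CHKERRQ(ierr);
    ierr = PetscSFGetGraph(sf, &nroots, &nleaves, &ilocal, NULL);CHKERRQ(ierr);
    /* ilocal is the sorted list of leaf points; loc >= 0 means "found" */
    ierr = PetscFindInt(cell, nleaves, ilocal, &loc);CHKERRQ(ierr);
    *owned = (loc < 0) ? PETSC_TRUE : PETSC_FALSE;
    return 0;
  }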

Hi Matt,
By using the local dm, I can get the PetscSF I want, as shown below. Now I need to get the number 
of ghost cells or local cells (here 4944) or the number of leaves (here 825) for each processor. I tried 
to use PetscSFGetGraph to get the number of leaves in Fortran. After including "petscsf.h", I 
got a compilation error saying "You need a ISO C conforming compiler to use the glibc 
headers". Is there any alternative way to do this? I do not 

Re: [petsc-users] Could not execute "['git', 'rev-parse', '--git-dir']"

2018-03-07 Thread Kong, Fande
On Wed, Mar 7, 2018 at 2:51 PM, Satish Balay  wrote:

> On Wed, 7 Mar 2018, Kong, Fande wrote:
>
> > > If you need to work around this - you can comment out that test
> (3 lines)..
> > >
> > >   File "/home/kongf/workhome/projects/petsc-3.7.7/config/
> > > BuildSystem/config/package.py", line 519, in updateGitDir
> > > gitdir,err,ret = config.base.Configure.executeShellCommand([self.
> sourceControl.git,
> > > 'rev-parse','--git-dir'], cwd=self.packageDir, log = self.log)
> > >
> > >
> >
> > "#self.updateGitDir()"  works.
>
> I meant just the 3 lines - not the whole function.
>

I knew this. "3 lines" does not work at all.

I forgot the error message.

Fande,


>
>
> diff --git a/config/BuildSystem/config/package.py
> b/config/BuildSystem/config/package.py
> index 85663247ce..439b2105c5 100644
> --- a/config/BuildSystem/config/package.py
> +++ b/config/BuildSystem/config/package.py
> @@ -516,9 +516,9 @@ class Package(config.base.Configure):
># verify that packageDir is actually a git clone
>if not os.path.isdir(os.path.join(self.packageDir,'.git')):
>  raise RuntimeError(self.packageDir +': is not a git repository!
> '+os.path.join(self.packageDir,'.git')+' not found!')
> -  gitdir,err,ret = 
> config.base.Configure.executeShellCommand([self.sourceControl.git,
> 'rev-parse','--git-dir'], cwd=self.packageDir, log = self.log)
> -  if gitdir != '.git':
> -raise RuntimeError(self.packageDir +': is not a git repository!
> "git rev-parse --gitdir" gives: '+gitdir)
> +  #gitdir,err,ret = 
> config.base.Configure.executeShellCommand([self.sourceControl.git,
> 'rev-parse','--git-dir'], cwd=self.packageDir, log = self.log)
> +  #if gitdir != '.git':
> +  #  raise RuntimeError(self.packageDir +': is not a git repository!
> "git rev-parse --gitdir" gives: '+gitdir)
>
>prefetch = 0
>if self.gitcommit.startswith('origin/'):
>
> Satish
>


Re: [petsc-users] Could not execute "['git', 'rev-parse', '--git-dir']"

2018-03-07 Thread Kong, Fande
On Wed, Mar 7, 2018 at 1:22 PM, Satish Balay  wrote:

> It's strange that you are getting this error in configure - but not on the
> command line.
>
> Does the following option make a difference?
>
> --useThreads=0
> or
> --useThreads=1
>

no difference.


>
> If you need to work around this - you can comment out that test (3 lines)..
>
>   File "/home/kongf/workhome/projects/petsc-3.7.7/config/
> BuildSystem/config/package.py", line 519, in updateGitDir
> gitdir,err,ret = 
> config.base.Configure.executeShellCommand([self.sourceControl.git,
> 'rev-parse','--git-dir'], cwd=self.packageDir, log = self.log)
>
>

"#self.updateGitDir()"  works.

I still do not understand why.


Fande,


>
> Satish
>
> On Wed, 7 Mar 2018, Kong, Fande wrote:
>
> > [kongf@falcon1 git.hypre]$ git rev-parse  --git-dir
> > .git
> > [kongf@falcon1 git.hypre]$ echo $?
> > 0
> > [kongf@falcon1 git.hypre]$
> > [kongf@falcon1 git.hypre]$ git fsck
> > Checking object directories: 100% (256/256), done.
> > Checking objects: 100% (22710/22710), done.
> > [kongf@falcon1 git.hypre]$
> >
> >
> >
> > But the same  error still persists!
> >
> >
> > Fande,
> >
> > On Wed, Mar 7, 2018 at 12:33 PM, Satish Balay  wrote:
> >
> > > 
> > > balay@asterix /home/balay
> > > $ git rev-parse --git-dir
> > > fatal: Not a git repository (or any parent up to mount point /)
> > > Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not
> set).
> > > balay@asterix /home/balay
> > > $ echo $?
> > > 128
> > > balay@asterix /home/balay
> > > $ cd petsc
> > > balay@asterix /home/balay/petsc (next *=)
> > > $ git rev-parse --git-dir
> > > .git
> > > balay@asterix /home/balay/petsc (next *=)
> > > $ echo $?
> > > 0
> > > balay@asterix /home/balay/petsc (next *=)
> > > $
> > > <
> > >
> > > So - for some reason, git gives an error in /home/kongf/workhome/projects/
> > > petsc-3.7.7/arch-linux2-c-opt-64bit/externalpackages/git.hypre
> > >
> > > You can try:
> > >
> > > cd /home/kongf/workhome/projects/petsc-3.7.7/arch-linux2-c-opt-
> > > 64bit/externalpackages/git.hypre
> > > git rev-parse --git-dir
> > > echo $?
> > > git fsck
> > >
> > > If this works - you can rerun configure and see if the error persists
> > >
> > > Satish
> > >
> > >
> > > On Wed, 7 Mar 2018, Kong, Fande wrote:
> > >
> > > > Hi PETSc team:
> > > >
> > > > What is the possible reason for this?
> > > >
> > > > The log file is attached.
> > > >
> > > >
> > > > Fande,
> > > >
> > >
> > >
> >
>
>


Re: [petsc-users] Could not execute "['git', 'rev-parse', '--git-dir']"

2018-03-07 Thread Satish Balay
On Wed, 7 Mar 2018, Kong, Fande wrote:

> > If you need to work around this - you can comment out that test (3 lines)..
> >
> >   File "/home/kongf/workhome/projects/petsc-3.7.7/config/
> > BuildSystem/config/package.py", line 519, in updateGitDir
> > gitdir,err,ret = 
> > config.base.Configure.executeShellCommand([self.sourceControl.git,
> > 'rev-parse','--git-dir'], cwd=self.packageDir, log = self.log)
> >
> >
> 
> "#self.updateGitDir()"  works.

I meant just the 3 lines - not the whole function.


diff --git a/config/BuildSystem/config/package.py 
b/config/BuildSystem/config/package.py
index 85663247ce..439b2105c5 100644
--- a/config/BuildSystem/config/package.py
+++ b/config/BuildSystem/config/package.py
@@ -516,9 +516,9 @@ class Package(config.base.Configure):
   # verify that packageDir is actually a git clone
   if not os.path.isdir(os.path.join(self.packageDir,'.git')):
 raise RuntimeError(self.packageDir +': is not a git repository! 
'+os.path.join(self.packageDir,'.git')+' not found!')
-  gitdir,err,ret = 
config.base.Configure.executeShellCommand([self.sourceControl.git, 
'rev-parse','--git-dir'], cwd=self.packageDir, log = self.log)
-  if gitdir != '.git':
-raise RuntimeError(self.packageDir +': is not a git repository! "git 
rev-parse --gitdir" gives: '+gitdir)
+  #gitdir,err,ret = 
config.base.Configure.executeShellCommand([self.sourceControl.git, 
'rev-parse','--git-dir'], cwd=self.packageDir, log = self.log)
+  #if gitdir != '.git':
+  #  raise RuntimeError(self.packageDir +': is not a git repository! "git 
rev-parse --gitdir" gives: '+gitdir)
 
   prefetch = 0
   if self.gitcommit.startswith('origin/'):

Satish


Re: [petsc-users] Leaking Memory From the Future????

2018-03-07 Thread zakaryah .
Your description is really vague and I doubt this has anything to do with
PETSc.

Did you recompile before executing the second test?

Have you stepped through with gdb or another debugger, and have you run with
valgrind?

If so, one thing to be extra careful about - do you have anything in global
or stack arrays, for example, an array declared in your function like
"float array[16];"?  Valgrind won't check bounds for those arrays - there
are other tools which do.  Anyway, it sounds like you have a bug but it's
hard to tell because I don't know what you are trying to do.
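
For reference, a typical valgrind run looks something like this (the executable
name and options are placeholders):

  mpiexec -n 2 valgrind --leak-check=full --track-origins=yes ./your_app -your_options

For out-of-bounds writes to stack or global arrays, building with a compiler
sanitizer (e.g. -fsanitize=address with gcc/clang) is one of the "other tools"
that will catch them.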


On Wed, Mar 7, 2018 at 4:13 PM, Ali Berk Kahraman  wrote:

> Dear All,
>
> I am in an impossible position. My real code is long, so I will briefly
> describe my situation.
>
> I call 2 functions that I have written in my code, it goes like
>
> Function1(Do stuff with MatGetValues and MatSetValues);
> Visualize the Result of Function1 with MatView;
> Function2(Do stuff with MatGetValues and MatSetValues);
>
> The result I get in this situation is wrong, because I know the correct
> result. Function 1 reaches rows of the matrix that it should not. But here
> is where it gets interesting. If I do not call Function2 at all, namely do
> the following,
>
> Function1(Do stuff with MatGetValues and MatSetValues);
> Visualize the Result of Function1 with MatView;
> /* Function2(Do stuff with MatGetValues and MatSetValues); */
>
> Here, the result I see from Function 1 is correct and different from the
> result of the first case. But, for my simulation to work, I must call
> Function2. I have been dealing with this for the whole day, and I still
> have no idea how this happens. The code sees the next line and changes the
> result of the previous line.
>
> Any ideas? Anything similar to this in the literature?
>
> Best Regards to All,
>
> Ali Berk Kahraman
> M.Sc. Student, Bogazici Uni.
> Istanbul, Turkey
>
>
>
>
>
>


Re: [petsc-users] Could not execute "['git', 'rev-parse', '--git-dir']"

2018-03-07 Thread Satish Balay
On Wed, 7 Mar 2018, Kong, Fande wrote:

> > I meant just the 3 lines - not the whole function.
> 
> I knew this. "3 lines" does not work at all.
> 
> I forgot the error message.

Then you are likely using the wrong [git] snapshot for that package - not
the snapshot listed by self.gitcommit.
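
One way to check which snapshot you actually have (using the path from the
earlier messages) is something like:

  cd /home/kongf/workhome/projects/petsc-3.7.7/arch-linux2-c-opt-64bit/externalpackages/git.hypre
  git log -1 --oneline

and compare that against the self.gitcommit value for hypre in the
corresponding BuildSystem package file.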

Satish


Re: [petsc-users] error with MPI + ifort

2018-03-07 Thread Adrián Amor
Hi Praveen,

Did you try defining PETSC_AVOID_MPIF_H? I mean, add
-DPETSC_AVOID_MPIF_H=1 when compiling with mpiifort, and I think that the
problem will be solved.
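
For example, the compile line from the original message would become something
like this (only the added define differs):

  mpifort -c -O3 -fpp -nogen-interface -W1 -WB -DNS -DVERSION=\"f8e6c025\" \
    -DPETSC_AVOID_MPIF_H=1 \
    -I/usr/local/share/applications/Intel_Compiler/petsc/include \
    -I/usr/local/share/applications/Intel_Compiler/hdf5/include -DHDF5 \
    -Tf checkgrid.F90 -o checkgrid.o -free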

2018-03-07 15:53 GMT+01:00 Praveen C :

> Dear all
>
> In a code like this
>
> subroutine checkgrid(g)
>
> #include <petsc/finclude/petscsys.h>
>
>use petscsys
>
>use mgrid
>
>use celldata
>
>use comdata
>
>implicit none
>
>type(grid),intent(in) :: g
>
>! Local variables
>
>integer:: i, j, v, tc, nv
>
>PetscInt   :: v1, v2
>
>PetscErrorCode :: ierr
>
>
>! Sum g%nvl over all partitions.
>
>Call MPI_Allreduce(g%nvl, nv, 1, MPI_INT, MPI_SUM, &
>
>   PETSC_COMM_WORLD, ierr); CHKERRQ(ierr)
>
> end subroutine checkgrid
>
>
> we get an error while compiling with mpich-3.2.1 + ifort + petsc-3.8.x
>
> mpifort -c -O3 -fpp -nogen-interface -W1 -WB -DNS -DVERSION=\"f8e6c025\"
> -I/usr/local/share/applications/Intel_Compiler/petsc/include
> -I/usr/local/share/applications/Intel_Compiler/hdf5/include -DHDF5 -Tf
> checkgrid.F90 -o checkgrid.o -free
> checkgrid.F90(40): error #6405: The same named entity from different
> modules and/or program units cannot be referenced.   [MPI_SUM]
>Call MPI_Allreduce(g%nvl, nv, 1, MPI_INT, MPI_SUM, &
>   -^
> compilation aborted for checkgrid.F90 (code 1)
>
> It works fine with clang and gnu compilers.
>
> Thanks
> praveen
>
>


[petsc-users] error with MPI + ifort

2018-03-07 Thread Praveen C
Dear all

In a code like this

subroutine checkgrid(g)

#include <petsc/finclude/petscsys.h>

   use petscsys

   use mgrid

   use celldata

   use comdata

   implicit none

   type(grid),intent(in) :: g

   ! Local variables

   integer:: i, j, v, tc, nv

   PetscInt   :: v1, v2

   PetscErrorCode :: ierr


   ! Sum g%nvl over all partitions.

   Call MPI_Allreduce(g%nvl, nv, 1, MPI_INT, MPI_SUM, &

  PETSC_COMM_WORLD, ierr); CHKERRQ(ierr)

end subroutine checkgrid


we get an error while compiling with mpich-3.2.1 + ifort + petsc-3.8.x

mpifort -c -O3 -fpp -nogen-interface -W1 -WB -DNS -DVERSION=\"f8e6c025\"
-I/usr/local/share/applications/Intel_Compiler/petsc/include
-I/usr/local/share/applications/Intel_Compiler/hdf5/include -DHDF5 -Tf
checkgrid.F90 -o checkgrid.o -free
checkgrid.F90(40): error #6405: The same named entity from different
modules and/or program units cannot be referenced.   [MPI_SUM]
   Call MPI_Allreduce(g%nvl, nv, 1, MPI_INT, MPI_SUM, &
  -^
compilation aborted for checkgrid.F90 (code 1)

It works fine with clang and gnu compilers.

Thanks
praveen


Re: [petsc-users] Scaling problem when cores > 600

2018-03-07 Thread Smith, Barry F.

   What solver are you using in the "Poisson log" case?

   If it is a Poisson problem then almost for sure you should be using Hypre 
BoomerAMG.
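
For reference, switching a PETSc-side Poisson solve to BoomerAMG is usually
just a matter of run-time options, e.g. something like:

   -ksp_type gmres -pc_type hypre -pc_hypre_type boomeramg -log_view

(these are the standard PETSc option names; BoomerAMG tuning options live
under the -pc_hypre_boomeramg_* prefix).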

   It sounds like your matrix does not change. You will need to discuss the 
scaling with the hypre people.

   Barry


> On Mar 7, 2018, at 5:38 AM, TAY wee-beng  wrote:
> 
> 
> On 7/3/2018 6:22 AM, Smith, Barry F. wrote:
>>The speed up for "Poisson log" is 1.6425364214878704 = 
>> 5.0848e+02/3.0957e+02
>> 
>> This is lower than I would expect for Hypre BoomerAMG?
>> 
>> Are you doing multiple solves with the same matrix with hypre or is each 
>> solve a new matrix? If each solve is a new matrix then you may be getting 
>> expected behavior since the multigrid AMG construction process does not 
>> scale as well as the application of AMG once it is constructed.
>> 
>> I am forwarding to the hypre team since this is their expertise not ours
>> 
>>Barry
>> 
> Hi,
> 
> My LHS of the eqn does not change. Only the RHS changes at each time step. So 
> should this be expected?
> 
> So maybe I should change to BoomerAMG and compare?
> 
> Will PETSc GAMG give better performance?
> 
> Also, I must add that I only partition in the x and y direction. Will this be 
> a factor?
> 
> Thanks.
> 
>>> On Mar 5, 2018, at 11:19 PM, TAY wee-beng  wrote:
>>> 
>>> 
>>> On 5/3/2018 11:43 AM, Smith, Barry F. wrote:
 360 process
 
 KSPSolve  99 1.0 2.6403e+02 1.0 6.67e+10 1.1 2.7e+05 9.9e+05 
 5.1e+02 15100 17 42 19  15100 17 42 19 87401
 
 1920 processes
 
 KSPSolve  99 1.0 2.3184e+01 1.0 1.32e+10 1.2 1.5e+06 4.3e+05 
 5.1e+02  4100 17 42 19   4100 17 42 19 967717
 
 
 The ratio of the number of processes is 5.33 and the ratio of time for KSPSolve is 11.388, so 
 the time for the solve is scaling very well (extremely well, actually). The 
 problem is
 due to "other time" that is not in KSPSolve. Note that the percentage of 
 the total time in KSPSolve went from 15 percent of the runtime to 4 
 percent. This means something outside of KSPSolve is scaling very poorly. 
 You will need to profile the rest of the code to determine where the time 
 is being spent. PetscLogEventRegister()  and PetscLogEventBegin/End() will 
 be needed in your code. Already with 360 processes the linear solver is 
 only taking 15 percent of the time.
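
A minimal sketch of such user-defined logging (in C; the Fortran calls are
analogous, and the event/class names here are made up):

  #include <petscsys.h>

  int main(int argc, char **argv)
  {
    PetscErrorCode ierr;
    PetscClassId   classid;
    PetscLogEvent  USER_EVENT;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    ierr = PetscClassIdRegister("User code", &classid);CHKERRQ(ierr);
    ierr = PetscLogEventRegister("NonKSPWork", classid, &USER_EVENT);CHKERRQ(ierr);

    ierr = PetscLogEventBegin(USER_EVENT, 0, 0, 0, 0);CHKERRQ(ierr);
    /* ... the part of the code outside KSPSolve that you want timed ... */
    ierr = PetscLogEventEnd(USER_EVENT, 0, 0, 0, 0);CHKERRQ(ierr);

    ierr = PetscFinalize();
    return ierr;
  }

The event then shows up as a separate line in the -log_view summary, which
makes it easy to see where the non-solver time is going.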
 
   Barry
 
>>> Hi,
>>> 
>>> I have attached the new logging results with the HYPRE Poisson eqn solver. 
>>> However, due to some problems, I am now using Intel 2018. It should be quite 
>>> similar to 2016 in terms of runtime. Using 360 processes doesn't work this 
>>> time, and I'm not sure why.
 
> On Mar 4, 2018, at 9:23 PM, TAY wee-beng  wrote:
> 
> 
> On 1/3/2018 12:14 PM, Smith, Barry F. wrote:
>>> On Feb 28, 2018, at 8:01 PM, TAY wee-beng  wrote:
>>> 
>>> 
>>> On 1/3/2018 12:10 AM, Matthew Knepley wrote:
 On Wed, Feb 28, 2018 at 10:45 AM, TAY wee-beng  
 wrote:
 Hi,
 
 I have a CFD code which uses PETSc and HYPRE. I found that for a 
 certain case with grid size of 192,570,048, I encounter scaling 
 problem when my cores > 600. At 600 cores, the code took 10min for 100 
 time steps. At 960, 1440 and 2880 cores, it still takes around 10min. 
 At 360 cores, it took 15min.
 
 So how can I find the bottleneck? Any recommended steps?
 
 For any performance question, we need to see the output of -log_view 
 for all test cases.
>>> Hi,
>>> 
>>> To be more specific, I use PETSc KSPBCGS and HYPRE geometric multigrid 
>>> (entirely based on HYPRE, no PETSc) for the momentum and Poisson eqns 
>>> in my code.
>>> 
>>> So can -log_view be used in this case to give meaningful results, since part 
>>> of the code uses HYPRE?
>>   Yes, just send the logs.
>> 
> Hi,
> 
> I have attached the logs, with the number indicating the no. of cores 
> used. Some of the new results are different from the previous runs, 
> although I'm using the same cluster.
> 
> Thanks for the help.
>>> I also programmed another subroutine in the past which uses PETSc to solve 
>>> the Poisson eqn. It uses either HYPRE's boomeramg, KSPBCGS or KSPGMRES.
>>> 
>>> If I use boomeramg, can -log_view be used in this case?
>>> 
>>> Or do I have to use KSPBCGS or KSPGMRES, which come directly from PETSc? 
>>> However, I ran KSPGMRES yesterday on the Poisson eqn and my answer 
>>> didn't converge.
>>> 
>>> Thanks.
  I must also mention that I partition my grid only in the x and y 
 direction. There is no partitioning in the z direction due to limited 
 code development. I wonder if there is a strong effect in this case.
 
 Maybe. Usually 

Re: [petsc-users] error with MPI + ifort

2018-03-07 Thread Smith, Barry F.

   Start with one use statement and slowly add more. My guess is that several 
of those modules use the MPI module, hence MPI_SUM appears through multiple 
paths.

   Barry


> On Mar 7, 2018, at 8:53 AM, Praveen C  wrote:
> 
> Dear all
> 
> In a code like this
> 
> subroutine checkgrid(g)
> #include <petsc/finclude/petscsys.h>
>use petscsys
>use mgrid
>use celldata
>use comdata
>implicit none
>type(grid),intent(in) :: g
>! Local variables
>integer:: i, j, v, tc, nv
>PetscInt   :: v1, v2
>PetscErrorCode :: ierr
> 
>! Sum g%nvl over all partitions.
>Call MPI_Allreduce(g%nvl, nv, 1, MPI_INT, MPI_SUM, &
>   PETSC_COMM_WORLD, ierr); CHKERRQ(ierr)
> end subroutine checkgrid
> 
> we get an error while compiling with mpich-3.2.1 + ifort + petsc-3.8.x
> 
> mpifort -c -O3 -fpp -nogen-interface -W1 -WB -DNS -DVERSION=\"f8e6c025\" 
> -I/usr/local/share/applications/Intel_Compiler/petsc/include 
> -I/usr/local/share/applications/Intel_Compiler/hdf5/include -DHDF5 -Tf 
> checkgrid.F90 -o checkgrid.o -free
> checkgrid.F90(40): error #6405: The same named entity from different modules 
> and/or program units cannot be referenced.   [MPI_SUM]
>Call MPI_Allreduce(g%nvl, nv, 1, MPI_INT, MPI_SUM, &
>   -^
> compilation aborted for checkgrid.F90 (code 1)
> 
> It works fine with clang and gnu compilers.
> 
> Thanks
> praveen
> 



Re: [petsc-users] error with MPI + ifort

2018-03-07 Thread Smith, Barry F.

   You have multiple 

use <module>

statements - make sure they don't have multiple paths back to MPI. For example, if two of 
them have "use petscsys", etc.

   Barry


> On Mar 7, 2018, at 10:11 AM, Praveen C  wrote:
> 
> 
> 
>> On 07-Mar-2018, at 9:34 PM, Smith, Barry F.  wrote:
>> 
>>   Start with one use statement and slowly add more. My guess is multiple 
>> ones of those modules use the MPI module hence MPI_SUM appears through 
>> multiple paths.
>> 
>>   Barry
> 
> Hello Barry
> 
> If you mean use of “use mpi”, then I don't have this anywhere in my code. 
> I only use PETSc includes and modules.
> 
> Thanks
> praveen



Re: [petsc-users] error with MPI + ifort

2018-03-07 Thread Praveen C


> On 07-Mar-2018, at 9:52 PM, Smith, Barry F.  wrote:
> 
>   You have multiple 
> 
>use <module>
> 
>statements - make sure they don't have multiple paths back to MPI. For example, if two 
> of them have "use petscsys", etc.
> 
>   Barry

None of the other modules I include have “use petscsys” inside them.

Thanks
praveen

Re: [petsc-users] error with MPI + ifort

2018-03-07 Thread Satish Balay
One gets such an error message if mpif.h is used in multiple modules.

mpi.mod [if used] is one module that will have this symbol. And if
there is a user module [or petsc module] also using mpif.h - then it
will get a duplicate copy of these symbols - and will result in such a
conflict.
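
A minimal sketch of how the duplication arises (the module and file names are
made up):

  ! mymod.F90 - pulls MPI symbols in via the include file
  module mymod
    implicit none
    include 'mpif.h'
  end module mymod

  ! main.F90 - MPI_SUM is now visible both from mpi.mod and from mymod;
  ! with ifort, the reference below triggers error #6405
  program main
    use mpi
    use mymod
    implicit none
    integer :: ierr, n, nsum
    call MPI_Init(ierr)
    n = 1
    call MPI_Allreduce(n, nsum, 1, MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD, ierr)
    call MPI_Finalize(ierr)
  end program main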

Satish


On Wed, 7 Mar 2018, Smith, Barry F. wrote:

> 
>You have multiple 
> 
> use <module>
> 
> statements - make sure they don't have multiple paths back to MPI. For example, if two 
> of them have "use petscsys", etc.
> 
>Barry
> 
> 
> > On Mar 7, 2018, at 10:11 AM, Praveen C  wrote:
> > 
> > 
> > 
> >> On 07-Mar-2018, at 9:34 PM, Smith, Barry F.  wrote:
> >> 
> >>   Start with one use statement and slowly add more. My guess is multiple 
> >> ones of those modules use the MPI module hence MPI_SUM appears through 
> >> multiple paths.
> >> 
> >>   Barry
> > 
> > Hello Barry
> > 
> > If you mean use of “use mpi”, then I don't have this anywhere in my 
> > code. I only use PETSc includes and modules.
> > 
> > Thanks
> > praveen
> 
>