Re: [petsc-users] Cmake problem on an old cluster

2023-01-19 Thread Danyang Su
Hi Satish,

For some unknown reason, during the CMake 3.18.5 installation I get the error "Cannot 
find a C++ compiler that supports both C++11 and the specified C++ flags." The 
system-installed CMake 3.2.3 is way too old. 

I will just leave it as is since superlu_dist is optional in my model. 

Thanks for your suggestions to make it work,

Danyang

On 2023-01-19, 4:52 PM, "Satish Balay" <ba...@mcs.anl.gov> wrote:


Looks like .bashrc is getting sourced again during the build process [as make 
creates a new bash shell during the build] - thus overriding the env variable 
that's set.


Glad you have a working build now. Thanks for the update!


BTW: superlu-dist requires cmake 3.18.1 or higher. You could check if this 
older version of cmake builds on this cluster [if you want to give superlu-dist 
a try again]


Satish




On Thu, 19 Jan 2023, Danyang Su wrote:


> Hi Satish,
> 
> That's a bit strange since I have already used export
> PETSC_DIR=/home/danyangs/soft/petsc/petsc-3.18.3.
> 
> Yes, I have petsc 3.13.6 installed and have PETSC_DIR set in the bashrc file.
> After changing PETSC_DIR in the bashrc file, PETSc can be compiled now.
> 
> Thanks,
> 
> Danyang
> 
> On 2023-01-19 3:58 p.m., Satish Balay wrote:
> >> /home/danyangs/soft/petsc/petsc-3.13.6/src/sys/makefile contains a
> >> directory not on the filesystem: ['\\']
> >
> > It's strange that it's complaining about petsc-3.13.6. Do you have this
> > location set in your .bashrc or similar file - that's getting sourced during
> > the build?
> >
> > Perhaps you could start with a fresh copy of petsc and retry?
> >
> > Also suggest using 'arch-' prefix for PETSC_ARCH i.e
> > 'arch-intel-14.0.2-openmpi-1.6.5' - just in case there are some bugs lurking
> > with skipping build files in this location
> >
> > Satish
> >
> >
> > On Thu, 19 Jan 2023, Danyang Su wrote:
> >
> >> Hi Barry and Satish,
> >>
> >> I guess there is a compatibility problem with some external package. The
> >> latest CMake complains about the compiler, so I removed the superlu_dist
> >> option since I rarely use it. Then the HYPRE package shows "Error: Hypre
> >> requires C++ compiler. None specified", which is a bit tricky since the C++
> >> compiler is specified in the configuration, so I commented out the related
> >> error check in hypre.py during configuration. After doing this, there is no
> >> error during PETSc configuration, but a new error occurs during the make
> >> process.
> >>
> >> **ERROR*
> >> Error during compile, check
> >> intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/make.log
> >> Send it and intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/configure.log to
> >> petsc-ma...@mcs.anl.gov 
> >> 
> >>
> >> It might not be worth checking this problem since most users do not work on
> >> such an old cluster. Both log files are attached in case any developer wants
> >> to check. Please let me know if there are any suggestions and I am willing
> >> to run a test.
> >>
> >> Thanks,
> >>
> >> Danyang
> >>
> >> On 2023-01-19 11:18 a.m., Satish Balay wrote:
> >>> BTW: cmake is required by superlu-dist not petsc.
> >>>
> >>> And it's possible that petsc might not build with this old version of
> >>> openmpi
> >>> - [and/or the externalpackages that you are installing - might not build
> >>> with this old version of intel compilers].
> >>>
> >>> Satish
> >>>
> >>> On Thu, 19 Jan 2023, Barry Smith wrote:
> >>>
>  Remove
>  --download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
>  and install CMake yourself. Then configure PETSc with
>  --with-cmake=directory you installed it in.
> 
>  Barry
> 
> 
> > On Jan 19, 2023, at 1:46 PM, Danyang Su wrote:
> >
> > Hi All,
> >
> > I am trying to install the latest PETSc on an old cluster but always get
> > some error information at the step of cmake. The system installed cmake
> > is
> > V3.2.3, which is out-of-date for PETSc. I tried to use --download-cmake
> > first, it does not work. Then I tried to clean everything (delete the
> > petsc_arch folder), download the latest cmake myself and pass the path
> > to
> > the configuration, the error is still there.
> >
> > The compiler there is a bit old, intel-14.0.2 and openmpi-1.6.5. I have
> > no
> > problem to install PETSc-3.13.6 there. The latest version cannot pass
> > configuration, unfortunately. Attached is the last configuration I have
> > tried.
> >
> > --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90
> > --download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
> > --download-mumps --download-scalapack --download-parmetis
> > --download-metis
> > --download-ptscotch --download-fblaslapack --download-hypre
> > 
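For anyone hitting the same C++11 check, here is a rough sketch of building a newer
CMake by hand and pointing PETSc at it. The compiler names, flags, and paths below
are assumptions for a cluster like this one, not something tested there:

cd $HOME/soft
tar -xzf cmake-3.18.5.tar.gz && cd cmake-3.18.5
# tell bootstrap which compilers and C++11 flag to use for its own build
CC=icc CXX=icpc CXXFLAGS="-std=c++11" ./bootstrap --prefix=$HOME/soft/cmake-3.18.5-install
make -j4 && make install
# then drop --download-cmake and, per Barry's suggestion, configure PETSc with
# --with-cmake=$HOME/soft/cmake-3.18.5-install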

Re: [petsc-users] Interpreting Redistribution SF

2023-01-19 Thread Nicholas Arnold-Medabalimi
Ok, thanks for the clarification. In theory, if I call DMGlobalToLocal before the
reduction back to the original distribution, then even with MPI_REPLACE all the
leaves corresponding to the original root should have the same value, so I won't
have an ambiguity, correct?



On Thu, Jan 19, 2023 at 9:28 PM Matthew Knepley  wrote:

> On Thu, Jan 19, 2023 at 9:13 PM Nicholas Arnold-Medabalimi <
> narno...@umich.edu> wrote:
>
>> Hi Matt
>>
>> Yep, that makes sense and is consistent.
>>
>> My question is a little more specific. So let's say I take an
>> initial mesh and distribute it and get the distribution SF with an overlap
>> of one. Consider a cell that is a root on process 0 and a leaf on process 1
>> after the distribution.
>>
>> Will the distribution pointSF have an entry for the cell that is a leaf
>> in the ghost cell sense?
>>
>> I guess, in short does the distribution SF only have entries for the
>> movement of points that are roots in the ghost SF?
>>
>
> I do not understand the question. Suppose that a certain cell, say 0, in
> the original distribution goes to two different processes, say 0 and 1, as
> will happen when you distribute with overlap. Then the migration SF has two
> leaf entries for that cell, one from process 0 and one from process 1. They
> both point to root cell 0 on process 0.
>
>
>> Sorry if this is a little unclear.
>>
>> Maybe my usage will be a bit clearer. I am generating a distributionSF
>> (type 2 in your desc) then using that to generate a dof distribution(type
>> 3) using the section information. I then pass the information from the
>> initial distribution to new distribution with PetscSFBcast with
>> MPI_REPLACE. That scatters the vector to the new distribution. I then do
>> "stuff" and now want to redistribute back. So I pass the same dof
>> distributionSF but call PetscSFReduce with MPI_REPLACE. My concern is I am
>> only setting the root cell values on each partition. So if the ghost cells
>> are part of the distribution SF there will be multiple cells reducing to
>> the original distribution cell?
>>
>
> Yes, definitely.
>
>   Thanks,
>
>  Matt
>
>
>> Thanks
>> Nicholas
>>
>>
>> On Thu, Jan 19, 2023 at 8:28 PM Matthew Knepley 
>> wrote:
>>
>>> On Thu, Jan 19, 2023 at 11:58 AM Nicholas Arnold-Medabalimi <
>>> narno...@umich.edu> wrote:
>>>
 Hi Petsc Users

 I'm working with a distribution star forest generated by
 DMPlexDistribute, and PetscSFBcast and Reduce to move data between the
 initial distribution and the distribution generated by DMPlexDistribute.

 I'm trying to debug some values that aren't being copied properly and
 wanted to verify I understand how a redistribution SF works compared with a
 SF that describes overlapped points.

   [0] 0 <- (0,7) point 0 on the distributed plex is point 7 on
 process 0 on the initial distribution
   [0] 1 <- (0,8) point 1 on the distributed plex is point 8 on
 process 0 on the initial distribution
   [0] 2 <- (0,9)
   [0] 3 <- (0,10)
   [0] 4 <- (0,11)

   [1] 0 <- (1,0) point 0 on the distributed plex is point 0 on
 process 1 on the initial distribution
   [1] 1 <- (1,1)
   [1] 2 <- (1,2)
   [1] 3 <- (0,0) point 3 on the distributed plex is point 0 on
 process 0 on the initial distribution
   [1] 4 <- (0,1)
   [1] 5 <- (0,2)

  My confusion, I think, is: how does the distributionSF tell us which
 cells will be leaves on the new distribution?

>>>
>>> I should eventually write something to clarify this. I am using SF in
>>> (at least) two different ways.
>>>
>>> First, there is a familiar SF that we use for dealing with "ghost"
>>> points. These are replicated points where one process
>>> is said to "own" the point and another process is said to hold a
>>> "ghost". The ghost points are leaves in the SF which
>>> point back to the root point owned by another process. We call this the
>>> pointSF for a DM.
>>>
>>> Second, we have a migration SF. Here the root points give the original
>>> point distribution. The leaf points give the new
>>> point distribution. Thus a PetscSFBcast() pushes points from the
>>> original to new distribution, which is what we mean
>>> by a migration.
>>>
>>> Third, instead of point values, we might want to communicate fields over
>>> those points. For this we make new SFes,
>>> where the numbering does not refer to points, but rather to dofs.
>>>
>>> Does this make sense?
>>>
>>>   Thanks,
>>>
>>> Matt
>>>
>>>
 Sincerely
 Nicholas

 --
 Nicholas Arnold-Medabalimi

 Ph.D. Candidate
 Computational Aeroscience Lab
 University of Michigan

>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> 
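For reference, a minimal sketch in C of the Bcast/Reduce pattern being discussed,
assuming a dof SF already built from the migration SF and the section as described
above (the function and array names are made up for illustration):

#include <petscsf.h>

/* origArray: dofs laid out on the original distribution (roots of dofSF)  */
/* distArray: dofs laid out on the new distribution (leaves of dofSF)      */
PetscErrorCode MoveAndReturn(PetscSF dofSF, PetscScalar *origArray, PetscScalar *distArray)
{
  PetscFunctionBeginUser;
  /* original -> new distribution */
  PetscCall(PetscSFBcastBegin(dofSF, MPIU_SCALAR, origArray, distArray, MPI_REPLACE));
  PetscCall(PetscSFBcastEnd(dofSF, MPIU_SCALAR, origArray, distArray, MPI_REPLACE));

  /* ... do "stuff" on the new distribution; with overlap, refresh the ghost
     values (e.g. a DMGlobalToLocal) so every leaf copy of a point carries the
     same data before going back ... */

  /* new -> original distribution: each leaf writes back to its root; with
     MPI_REPLACE, overlapping leaves must agree or the surviving value is
     arbitrary, which is exactly the ambiguity discussed above */
  PetscCall(PetscSFReduceBegin(dofSF, MPIU_SCALAR, distArray, origArray, MPI_REPLACE));
  PetscCall(PetscSFReduceEnd(dofSF, MPIU_SCALAR, distArray, origArray, MPI_REPLACE));
  PetscFunctionReturn(0);
}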

Re: [petsc-users] Interpreting Redistribution SF

2023-01-19 Thread Matthew Knepley
On Thu, Jan 19, 2023 at 9:13 PM Nicholas Arnold-Medabalimi <
narno...@umich.edu> wrote:

> Hi Matt
>
> Yep, that makes sense and is consistent.
>
> My question is a little more specific. So let's say I take an initial mesh
> and distribute it and get the distribution SF with an overlap of one.
> Consider a cell that is a root on process 0 and a leaf on process 1 after
> the distribution.
>
> Will the distribution pointSF have an entry for the cell that is a leaf in
> the ghost cell sense?
>
> I guess, in short does the distribution SF only have entries for the
> movement of points that are roots in the ghost SF?
>

I do not understand the question. Suppose that a certain cell, say 0, in
the original distribution goes to two different processes, say 0 and 1, as
will happen when you distribute with overlap. Then the migration SF has two
leaf entries for that cell, one from process 0 and one from process 1. They
both point to root cell 0 on process 0.


> Sorry if this is a little unclear.
>
> Maybe my usage will be a bit clearer. I am generating a distributionSF
> (type 2 in your desc) then using that to generate a dof distribution(type
> 3) using the section information. I then pass the information from the
> initial distribution to new distribution with PetscSFBcast with
> MPI_REPLACE. That scatters the vector to the new distribution. I then do
> "stuff" and now want to redistribute back. So I pass the same dof
> distributionSF but call PetscSFReduce with MPI_REPLACE. My concern is I am
> only setting the root cell values on each partition. So if the ghost cells
> are part of the distribution SF there will be multiple cells reducing to
> the original distribution cell?
>

Yes, definitely.

  Thanks,

 Matt


> Thanks
> Nicholas
>
>
> On Thu, Jan 19, 2023 at 8:28 PM Matthew Knepley  wrote:
>
>> On Thu, Jan 19, 2023 at 11:58 AM Nicholas Arnold-Medabalimi <
>> narno...@umich.edu> wrote:
>>
>>> Hi Petsc Users
>>>
>>> I'm working with a distribution star forest generated by
>>> DMPlexDistribute, and PetscSFBcast and Reduce to move data between the
>>> initial distribution and the distribution generated by DMPlexDistribute.
>>>
>>> I'm trying to debug some values that aren't being copied properly and
>>> wanted to verify I understand how a redistribution SF works compared with a
>>> SF that describes overlapped points.
>>>
>>>   [0] 0 <- (0,7) point 0 on the distributed plex is point 7 on
>>> process 0 on the initial distribution
>>>   [0] 1 <- (0,8) point 1 on the distributed plex is point 8 on
>>> process 0 on the initial distribution
>>>   [0] 2 <- (0,9)
>>>   [0] 3 <- (0,10)
>>>   [0] 4 <- (0,11)
>>>
>>>   [1] 0 <- (1,0) point 0 on the distributed plex is point 0 on
>>> process 1 on the initial distribution
>>>   [1] 1 <- (1,1)
>>>   [1] 2 <- (1,2)
>>>   [1] 3 <- (0,0) point 3 on the distributed plex is point 0 on
>>> process 0 on the initial distribution
>>>   [1] 4 <- (0,1)
>>>   [1] 5 <- (0,2)
>>>
>>>  My confusion, I think, is: how does the distributionSF tell us which
>>> cells will be leaves on the new distribution?
>>>
>>
>> I should eventually write something to clarify this. I am using SF in (at
>> least) two different ways.
>>
>> First, there is a familiar SF that we use for dealing with "ghost"
>> points. These are replicated points where one process
>> is said to "own" the point and another process is said to hold a "ghost".
>> The ghost points are leaves in the SF which
>> point back to the root point owned by another process. We call this the
>> pointSF for a DM.
>>
>> Second, we have a migration SF. Here the root points give the original
>> point distribution. The leaf points give the new
>> point distribution. Thus a PetscSFBcast() pushes points from the original
>> to new distribution, which is what we mean
>> by a migration.
>>
>> Third, instead of point values, we might want to communicate fields over
>> those points. For this we make new SFes,
>> where the numbering does not refer to points, but rather to dofs.
>>
>> Does this make sense?
>>
>>   Thanks,
>>
>> Matt
>>
>>
>>> Sincerely
>>> Nicholas
>>>
>>> --
>>> Nicholas Arnold-Medabalimi
>>>
>>> Ph.D. Candidate
>>> Computational Aeroscience Lab
>>> University of Michigan
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>
>
> --
> Nicholas Arnold-Medabalimi
>
> Ph.D. Candidate
> Computational Aeroscience Lab
> University of Michigan
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Interpreting Redistribution SF

2023-01-19 Thread Nicholas Arnold-Medabalimi
Hi Matt

Yep, that makes sense and is consistent.

My question is a little more specific. So let's say I take an initial mesh
and distribute it and get the distribution SF with an overlap of one.
Consider a cell that is a root on process 0 and a leaf on process 1 after
the distribution.

Will the distribution pointSF have an entry for the cell that is a leaf in
the ghost cell sense?

I guess, in short does the distribution SF only have entries for the
movement of points that are roots in the ghost SF?

Sorry if this is a little unclear.

Maybe my usage will be a bit clearer. I am generating a distributionSF
(type 2 in your description), then using that to generate a dof distribution SF
(type 3) using the section information. I then pass the information from the
initial distribution to the new distribution with PetscSFBcast with
MPI_REPLACE. That scatters the vector to the new distribution. I then do
"stuff" and now want to redistribute back, so I pass the same dof
distributionSF but call PetscSFReduce with MPI_REPLACE. My concern is that I am
only setting the root cell values on each partition. So if the ghost cells
are part of the distribution SF, will there be multiple cells reducing to
the original distribution cell?


Thanks
Nicholas


On Thu, Jan 19, 2023 at 8:28 PM Matthew Knepley  wrote:

> On Thu, Jan 19, 2023 at 11:58 AM Nicholas Arnold-Medabalimi <
> narno...@umich.edu> wrote:
>
>> Hi Petsc Users
>>
>> I'm working with a distribution star forest generated by
>> DMPlexDistribute, and PetscSFBcast and Reduce to move data between the
>> initial distribution and the distribution generated by DMPlexDistribute.
>>
>> I'm trying to debug some values that aren't being copied properly and
>> wanted to verify I understand how a redistribution SF works compared with a
>> SF that describes overlapped points.
>>
>>   [0] 0 <- (0,7) point 0 on the distributed plex is point 7 on
>> process 0 on the initial distribution
>>   [0] 1 <- (0,8) point 1 on the distributed plex is point 8 on
>> process 0 on the initial distribution
>>   [0] 2 <- (0,9)
>>   [0] 3 <- (0,10)
>>   [0] 4 <- (0,11)
>>
>>   [1] 0 <- (1,0) point 0 on the distributed plex is point 0 on
>> process 1 on the initial distribution
>>   [1] 1 <- (1,1)
>>   [1] 2 <- (1,2)
>>   [1] 3 <- (0,0) point 3 on the distributed plex is point 0 on
>> process 0 on the initial distribution
>>   [1] 4 <- (0,1)
>>   [1] 5 <- (0,2)
>>
>>  My confusion, I think, is: how does the distributionSF tell us which cells
>> will be leaves on the new distribution?
>>
>
> I should eventually write something to clarify this. I am using SF in (at
> least) two different ways.
>
> First, there is a familiar SF that we use for dealing with "ghost" points.
> These are replicated points where one process
> is said to "own" the point and another process is said to hold a "ghost".
> The ghost points are leaves in the SF which
> point back to the root point owned by another process. We call this the
> pointSF for a DM.
>
> Second, we have a migration SF. Here the root points give the original
> point distribution. The leaf points give the new
> point distribution. Thus a PetscSFBcast() pushes points from the original
> to new distribution, which is what we mean
> by a migration.
>
> Third, instead of point values, we might want to communicate fields over
> those points. For this we make new SFes,
> where the numbering does not refer to points, but rather to dofs.
>
> Does this make sense?
>
>   Thanks,
>
> Matt
>
>
>> Sincerely
>> Nicholas
>>
>> --
>> Nicholas Arnold-Medabalimi
>>
>> Ph.D. Candidate
>> Computational Aeroscience Lab
>> University of Michigan
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


-- 
Nicholas Arnold-Medabalimi

Ph.D. Candidate
Computational Aeroscience Lab
University of Michigan


Re: [petsc-users] Interpreting Redistribution SF

2023-01-19 Thread Matthew Knepley
On Thu, Jan 19, 2023 at 11:58 AM Nicholas Arnold-Medabalimi <
narno...@umich.edu> wrote:

> Hi Petsc Users
>
> I'm working with a distribution star forest generated by
> DMPlexDistribute, and PetscSFBcast and Reduce to move data between the
> initial distribution and the distribution generated by DMPlexDistribute.
>
> I'm trying to debug some values that aren't being copied properly and
> wanted to verify I understand how a redistribution SF works compared with a
> SF that describes overlapped points.
>
>   [0] 0 <- (0,7) point 0 on the distributed plex is point 7 on process
> 0 on the initial distribution
>   [0] 1 <- (0,8) point 1 on the distributed plex is point 8 on process
> 0 on the initial distribution
>   [0] 2 <- (0,9)
>   [0] 3 <- (0,10)
>   [0] 4 <- (0,11)
>
>   [1] 0 <- (1,0) point 0 on the distributed plex is point 0 on process
> 1 on the initial distribution
>   [1] 1 <- (1,1)
>   [1] 2 <- (1,2)
>   [1] 3 <- (0,0) point 3 on the distributed plex is point 0 on process
> 0 on the initial distribution
>   [1] 4 <- (0,1)
>   [1] 5 <- (0,2)
>
>  My confusion, I think, is: how does the distributionSF tell us which cells
> will be leaves on the new distribution?
>

I should eventually write something to clarify this. I am using SF in (at
least) two different ways.

First, there is a familiar SF that we use for dealing with "ghost" points.
These are replicated points where one process
is said to "own" the point and another process is said to hold a "ghost".
The ghost points are leaves in the SF which
point back to the root point owned by another process. We call this the
pointSF for a DM.

Second, we have a migration SF. Here the root points give the original
point distribution. The leaf points give the new
point distribution. Thus a PetscSFBcast() pushes points from the original
to new distribution, which is what we mean
by a migration.

Third, instead of point values, we might want to communicate fields over
those points. For this we make new SFes,
where the numbering does not refer to points, but rather to dofs.

Does this make sense?

  Thanks,

Matt


> Sincerely
> Nicholas
>
> --
> Nicholas Arnold-Medabalimi
>
> Ph.D. Candidate
> Computational Aeroscience Lab
> University of Michigan
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 
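As a rough illustration of that third kind of SF, here is a sketch of building a dof
SF from a migration SF and a PetscSection. This is written from memory of the PetscSF
API, so treat the exact calls and argument order as assumptions to check against the
man pages; the variable names are made up:

#include <petscsf.h>
#include <petscsection.h>

/* pointSF:     migration SF over points (roots = original, leaves = new)          */
/* origSection: dof layout on the original distribution                            */
/* newSection:  already created with PetscSectionCreate(); receives the new layout */
/* dofSF:       resulting SF over dofs, usable with PetscSFBcast/Reduce            */
PetscErrorCode BuildDofSF(PetscSF pointSF, PetscSection origSection,
                          PetscSection newSection, PetscSF *dofSF)
{
  PetscInt *remoteOffsets = NULL;

  PetscFunctionBeginUser;
  /* push the section layout onto the new point distribution */
  PetscCall(PetscSFDistributeSection(pointSF, origSection, &remoteOffsets, newSection));
  /* build the SF that maps original dofs (roots) to new dofs (leaves) */
  PetscCall(PetscSFCreateSectionSF(pointSF, origSection, remoteOffsets, newSection, dofSF));
  PetscCall(PetscFree(remoteOffsets));
  PetscFunctionReturn(0);
}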


Re: [petsc-users] Cmake problem on an old cluster

2023-01-19 Thread Satish Balay via petsc-users
Looks like .bashrc is getting sourced again during the build process [as make 
creates a new bash shell during the build] - thus overriding the env variable 
that's set.

Glad you have a working build now. Thanks for the update!

BTW: superlu-dist requires cmake 3.18.1 or higher. You could check if this 
older version of cmake builds on this cluster [if you want to give superlu-dist 
a try again]

Satish


On Thu, 19 Jan 2023, Danyang Su wrote:

> Hi Satish,
> 
> That's a bit strange since I have already used export
> PETSC_DIR=/home/danyangs/soft/petsc/petsc-3.18.3.
> 
> Yes, I have petsc 3.13.6 installed and have PETSC_DIR set in the bashrc file.
> After changing PETSC_DIR in the bashrc file, PETSc can be compiled now.
> 
> Thanks,
> 
> Danyang
> 
> On 2023-01-19 3:58 p.m., Satish Balay wrote:
> >> /home/danyangs/soft/petsc/petsc-3.13.6/src/sys/makefile contains a
> >> directory not on the filesystem: ['\\']
> >
> > It's strange that it's complaining about petsc-3.13.6. Do you have this
> > location set in your .bashrc or similar file - that's getting sourced during
> > the build?
> >
> > Perhaps you could start with a fresh copy of petsc and retry?
> >
> > Also suggest using 'arch-' prefix for PETSC_ARCH i.e
> > 'arch-intel-14.0.2-openmpi-1.6.5' - just in case there are some bugs lurking
> > with skipping build files in this location
> >
> > Satish
> >
> >
> > On Thu, 19 Jan 2023, Danyang Su wrote:
> >
> >> Hi Barry and Satish,
> >>
> >> I guess there is compatibility problem with some external package. The
> >> latest
> >> CMake complains about the compiler, so I remove superlu_dist option since I
> >> rarely use it. Then the HYPRE package shows "Error: Hypre requires C++
> >> compiler. None specified", which is a bit tricky since c++ compiler is
> >> specified in the configuration so I comment the related error code in
> >> hypre.py
> >> during configuration. After doing this, there is no error during PETSc
> >> configuration but new error occurs during make process.
> >>
> >> **ERROR*
> >>   Error during compile, check
> >> intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/make.log
> >>   Send it and intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/configure.log to
> >> petsc-ma...@mcs.anl.gov
> >> 
> >>
> >> It might be not worth checking this problem since most of the users do not
> >> work on such old cluster. Both log files are attached in case any developer
> >> wants to check. Please let me know if there is any suggestions and I am
> >> willing to make a test.
> >>
> >> Thanks,
> >>
> >> Danyang
> >>
> >> On 2023-01-19 11:18 a.m., Satish Balay wrote:
> >>> BTW: cmake is required by superlu-dist not petsc.
> >>>
> >>> And its possible that petsc might not build with this old version of
> >>> openmpi
> >>> - [and/or the externalpackages that you are installing - might not build
> >>> with this old version of intel compilers].
> >>>
> >>> Satish
> >>>
> >>> On Thu, 19 Jan 2023, Barry Smith wrote:
> >>>
>  Remove
>  
>  --download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
>  and install CMake yourself. Then configure PETSc with
>  --with-cmake=directory you installed it in.
> 
>  Barry
> 
> 
> > On Jan 19, 2023, at 1:46 PM, Danyang Su  wrote:
> >
> > Hi All,
> >
> > I am trying to install the latest PETSc on an old cluster but always get
> > some error information at the step of cmake. The system installed cmake
> > is
> > V3.2.3, which is out-of-date for PETSc. I tried to use --download-cmake
> > first, it does not work. Then I tried to clean everything (delete the
> > petsc_arch folder), download the latest cmake myself and pass the path
> > to
> > the configuration, the error is still there.
> >
> > The compiler there is a bit old, intel-14.0.2 and openmpi-1.6.5. I have
> > no
> > problem to install PETSc-3.13.6 there. The latest version cannot pass
> > configuration, unfortunately. Attached is the last configuration I have
> > tried.
> >
> > --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90
> > --download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
> > --download-mumps --download-scalapack --download-parmetis
> > --download-metis
> > --download-ptscotch --download-fblaslapack --download-hypre
> > --download-superlu_dist --download-hdf5=yes --with-hdf5-fortran-bindings
> > --with-debugging=0 COPTFLAGS="-O2 -march=native -mtune=native"
> > CXXOPTFLAGS="-O2 -march=native -mtune=native" FOPTFLAGS="-O2
> > -march=native
> > -mtune=native"
> >
> > Is there any solution for this.
> >
> > Thanks,
> >
> > Danyang
> >
> >
> > 
> 


Re: [petsc-users] Cmake problem on an old cluster

2023-01-19 Thread Danyang Su

Hi Satish,

That's a bit strange since I have already used export 
PETSC_DIR=/home/danyangs/soft/petsc/petsc-3.18.3.


Yes, I have petsc 3.13.6 installed and have PETSC_DIR set in the bashrc 
file. After changing PETSC_DIR in the bashrc file, PETSc can be compiled 
now.


Thanks,

Danyang

On 2023-01-19 3:58 p.m., Satish Balay wrote:

/home/danyangs/soft/petsc/petsc-3.13.6/src/sys/makefile contains a directory 
not on the filesystem: ['\\']


It's strange that it's complaining about petsc-3.13.6. Do you have this location 
set in your .bashrc or similar file - that's getting sourced during the build?

Perhaps you could start with a fresh copy of petsc and retry?

Also suggest using 'arch-' prefix for PETSC_ARCH i.e 
'arch-intel-14.0.2-openmpi-1.6.5' - just in case there are some bugs lurking 
with skipping build files in this location

Satish


On Thu, 19 Jan 2023, Danyang Su wrote:


Hi Barry and Satish,

I guess there is a compatibility problem with some external package. The latest
CMake complains about the compiler, so I removed the superlu_dist option since I
rarely use it. Then the HYPRE package shows "Error: Hypre requires C++
compiler. None specified", which is a bit tricky since the C++ compiler is
specified in the configuration, so I commented out the related error check in
hypre.py during configuration. After doing this, there is no error during PETSc
configuration, but a new error occurs during the make process.

**ERROR*
   Error during compile, check
intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/make.log
   Send it and intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/configure.log to
petsc-ma...@mcs.anl.gov


It might not be worth checking this problem since most users do not work on
such an old cluster. Both log files are attached in case any developer wants
to check. Please let me know if there are any suggestions and I am willing to
run a test.

Thanks,

Danyang

On 2023-01-19 11:18 a.m., Satish Balay wrote:

BTW: cmake is required by superlu-dist not petsc.

And it's possible that petsc might not build with this old version of openmpi
- [and/or the externalpackages that you are installing - might not build
with this old version of intel compilers].

Satish

On Thu, 19 Jan 2023, Barry Smith wrote:


Remove

--download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
and install CMake yourself. Then configure PETSc with
--with-cmake=directory you installed it in.

Barry



On Jan 19, 2023, at 1:46 PM, Danyang Su  wrote:

Hi All,

I am trying to install the latest PETSc on an old cluster but always get
some error at the cmake step. The system-installed cmake is V3.2.3, which is
out-of-date for PETSc. I tried to use --download-cmake first, but it does not
work. Then I tried to clean everything (delete the petsc_arch folder),
download the latest cmake myself, and pass the path to the configuration, but
the error is still there.

The compiler and MPI there are a bit old, intel-14.0.2 and openmpi-1.6.5. I
have no problem installing PETSc-3.13.6 there. The latest version cannot pass
configuration, unfortunately. Attached is the last configuration I have
tried.

--with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90
--download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
--download-mumps --download-scalapack --download-parmetis --download-metis
--download-ptscotch --download-fblaslapack --download-hypre
--download-superlu_dist --download-hdf5=yes --with-hdf5-fortran-bindings
--with-debugging=0 COPTFLAGS="-O2 -march=native -mtune=native"
CXXOPTFLAGS="-O2 -march=native -mtune=native" FOPTFLAGS="-O2 -march=native
-mtune=native"

Is there any solution for this?

Thanks,

Danyang





Re: [petsc-users] Cmake problem on an old cluster

2023-01-19 Thread Satish Balay via petsc-users
> /home/danyangs/soft/petsc/petsc-3.13.6/src/sys/makefile contains a directory 
> not on the filesystem: ['\\']


It's strange that it's complaining about petsc-3.13.6. Do you have this location 
set in your .bashrc or similar file - that's getting sourced during the build?

Perhaps you could start with a fresh copy of petsc and retry?

Also suggest using 'arch-' prefix for PETSC_ARCH i.e 
'arch-intel-14.0.2-openmpi-1.6.5' - just in case there are some bugs lurking 
with skipping build files in this location

Satish


On Thu, 19 Jan 2023, Danyang Su wrote:

> Hi Barry and Satish,
> 
> I guess there is compatibility problem with some external package. The latest
> CMake complains about the compiler, so I remove superlu_dist option since I
> rarely use it. Then the HYPRE package shows "Error: Hypre requires C++
> compiler. None specified", which is a bit tricky since c++ compiler is
> specified in the configuration so I comment the related error code in hypre.py
> during configuration. After doing this, there is no error during PETSc
> configuration but new error occurs during make process.
> 
> **ERROR*
>   Error during compile, check
> intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/make.log
>   Send it and intel-14.0.2-openmpi-1.6.5/lib/petsc/conf/configure.log to
> petsc-ma...@mcs.anl.gov
> 
> 
> It might be not worth checking this problem since most of the users do not
> work on such old cluster. Both log files are attached in case any developer
> wants to check. Please let me know if there is any suggestions and I am
> willing to make a test.
> 
> Thanks,
> 
> Danyang
> 
> On 2023-01-19 11:18 a.m., Satish Balay wrote:
> > BTW: cmake is required by superlu-dist not petsc.
> >
> > And its possible that petsc might not build with this old version of openmpi
> > - [and/or the externalpackages that you are installing - might not build
> > with this old version of intel compilers].
> >
> > Satish
> >
> > On Thu, 19 Jan 2023, Barry Smith wrote:
> >
> >>Remove
> >>
> >> --download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
> >>and install CMake yourself. Then configure PETSc with
> >>--with-cmake=directory you installed it in.
> >>
> >>Barry
> >>
> >>
> >>> On Jan 19, 2023, at 1:46 PM, Danyang Su  wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I am trying to install the latest PETSc on an old cluster but always get
> >>> some error information at the step of cmake. The system installed cmake is
> >>> V3.2.3, which is out-of-date for PETSc. I tried to use --download-cmake
> >>> first, it does not work. Then I tried to clean everything (delete the
> >>> petsc_arch folder), download the latest cmake myself and pass the path to
> >>> the configuration, the error is still there.
> >>>
> >>> The compiler there is a bit old, intel-14.0.2 and openmpi-1.6.5. I have no
> >>> problem to install PETSc-3.13.6 there. The latest version cannot pass
> >>> configuration, unfortunately. Attached is the last configuration I have
> >>> tried.
> >>>
> >>> --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90
> >>> --download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
> >>> --download-mumps --download-scalapack --download-parmetis --download-metis
> >>> --download-ptscotch --download-fblaslapack --download-hypre
> >>> --download-superlu_dist --download-hdf5=yes --with-hdf5-fortran-bindings
> >>> --with-debugging=0 COPTFLAGS="-O2 -march=native -mtune=native"
> >>> CXXOPTFLAGS="-O2 -march=native -mtune=native" FOPTFLAGS="-O2 -march=native
> >>> -mtune=native"
> >>>
> >>> Is there any solution for this.
> >>>
> >>> Thanks,
> >>>
> >>> Danyang
> >>>
> >>>
> >>> 
> 


Re: [petsc-users] Cmake problem on an old cluster

2023-01-19 Thread Satish Balay via petsc-users
BTW: cmake is required by superlu-dist not petsc.

And it's possible that petsc might not build with this old version of openmpi - 
[and/or the externalpackages that you are installing - might not build with 
this old version of intel compilers].

Satish

On Thu, 19 Jan 2023, Barry Smith wrote:

> 
>   Remove 
> --download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
>   and install CMake yourself. Then configure PETSc with 
> --with-cmake=directory you installed it in.
> 
>   Barry
> 
> 
> > On Jan 19, 2023, at 1:46 PM, Danyang Su  wrote:
> > 
> > Hi All,
> > 
> > I am trying to install the latest PETSc on an old cluster but always get 
> > some error information at the step of cmake. The system installed cmake is 
> > V3.2.3, which is out-of-date for PETSc. I tried to use --download-cmake 
> > first, it does not work. Then I tried to clean everything (delete the 
> > petsc_arch folder), download the latest cmake myself and pass the path to 
> > the configuration, the error is still there.
> > 
> > The compiler there is a bit old, intel-14.0.2 and openmpi-1.6.5. I have no 
> > problem to install PETSc-3.13.6 there. The latest version cannot pass 
> > configuration, unfortunately. Attached is the last configuration I have 
> > tried.
> > 
> > --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 
> > --download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
> >  --download-mumps --download-scalapack --download-parmetis --download-metis 
> > --download-ptscotch --download-fblaslapack --download-hypre 
> > --download-superlu_dist --download-hdf5=yes --with-hdf5-fortran-bindings 
> > --with-debugging=0 COPTFLAGS="-O2 -march=native -mtune=native" 
> > CXXOPTFLAGS="-O2 -march=native -mtune=native" FOPTFLAGS="-O2 -march=native 
> > -mtune=native"
> > 
> > Is there any solution for this.
> > 
> > Thanks,
> > 
> > Danyang
> > 
> > 
> > 
> 



Re: [petsc-users] locally deploy PETSc

2023-01-19 Thread Tim Meehan
Thanks Jed!

I ran:
make clean
./configure --prefix=/opt/petsc
make all check
sudo make install

It then worked like you said, so thanks!
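For the other developers on the machine, a quick sketch of how builds can point at
the prefix install (assuming the /opt/petsc prefix above; a prefix install is used
with an empty PETSC_ARCH, and the makefile fragment is only an illustrative example):

export PETSC_DIR=/opt/petsc
unset PETSC_ARCH
# a user makefile can then pull in the installed PETSc variables and rules:
#   include ${PETSC_DIR}/lib/petsc/conf/variables
#   include ${PETSC_DIR}/lib/petsc/conf/rules
# and link with ${CLINKER} ... ${PETSC_LIB}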

-Original Message-
From: Jed Brown  
Sent: Thursday, January 19, 2023 12:56 PM
To: Tim Meehan ; petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] locally deploy PETSc


You're probably looking for ./configure --prefix=/opt/petsc. It's documented in 
./configure --help.

Tim Meehan  writes:

> Hi - I am trying to set up a local workstation for a few other developers who 
> need PETSc installed from the latest release. I figured that it would be 
> easiest for me to just clone the repository, as mentioned in the Quick Start.
>
> So, in /home/me/opt, I issued:
> git clone -b release https://gitlab.com/petsc/petsc.git petsc
> cd petsc
> ./configure
> make all check
>
> Things work fine, but I would like to install it in /opt/petsc, minus 
> all of the build debris
>
> Is there some way to have './configure' do this?
> (I was actually thinking that the configure script was from GNU 
> autotools or something - but obviously not)
>
> Cheers,
> Tim


Re: [petsc-users] Cmake problem on an old cluster

2023-01-19 Thread Barry Smith


  Remove 
--download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
  and install CMake yourself. Then configure PETSc with --with-cmake=directory 
you installed it in.

  Barry


> On Jan 19, 2023, at 1:46 PM, Danyang Su  wrote:
> 
> Hi All,
> 
> I am trying to install the latest PETSc on an old cluster but always get some 
> error information at the step of cmake. The system installed cmake is V3.2.3, 
> which is out-of-date for PETSc. I tried to use --download-cmake first, it 
> does not work. Then I tried to clean everything (delete the petsc_arch 
> folder), download the latest cmake myself and pass the path to the 
> configuration, the error is still there.
> 
> The compiler there is a bit old, intel-14.0.2 and openmpi-1.6.5. I have no 
> problem to install PETSc-3.13.6 there. The latest version cannot pass 
> configuration, unfortunately. Attached is the last configuration I have tried.
> 
> --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 
> --download-cmake=/home/danyangs/soft/petsc/petsc-3.18.3/packages/cmake-3.25.1.tar.gz
>  --download-mumps --download-scalapack --download-parmetis --download-metis 
> --download-ptscotch --download-fblaslapack --download-hypre 
> --download-superlu_dist --download-hdf5=yes --with-hdf5-fortran-bindings 
> --with-debugging=0 COPTFLAGS="-O2 -march=native -mtune=native" 
> CXXOPTFLAGS="-O2 -march=native -mtune=native" FOPTFLAGS="-O2 -march=native 
> -mtune=native"
> 
> Is there any solution for this.
> 
> Thanks,
> 
> Danyang
> 
> 
> 



Re: [petsc-users] locally deploy PETSc

2023-01-19 Thread Jed Brown
You're probably looking for ./configure --prefix=/opt/petsc. It's documented in 
./configure --help. 

Tim Meehan  writes:

> Hi - I am trying to set up a local workstation for a few other developers who 
> need PETSc installed from the latest release. I figured that it would be 
> easiest for me to just clone the repository, as mentioned in the Quick Start.
>
> So, in /home/me/opt, I issued:
> git clone -b release https://gitlab.com/petsc/petsc.git petsc
> cd petsc
> ./configure
> make all check
>
> Things work fine, but I would like to install it in /opt/petsc, minus all of 
> the build debris
>
> Is there some way to have './configure' do this?
> (I was actually thinking that the configure script was from GNU autotools or 
> something - but obviously not)
>
> Cheers,
> Tim


[petsc-users] locally deploy PETSc

2023-01-19 Thread Tim Meehan
Hi - I am trying to set up a local workstation for a few other developers who 
need PETSc installed from the latest release. I figured that it would be 
easiest for me to just clone the repository, as mentioned in the Quick Start.

So, in /home/me/opt, I issued:
git clone -b release https://gitlab.com/petsc/petsc.git petsc
cd petsc
./configure
make all check

Things work fine, but I would like to install it in /opt/petsc, minus all of 
the build debris

Is there some way to have './configure' do this?
(I was actually thinking that the configure script was from GNU autotools or 
something - but obviously not)

Cheers,
Tim


[petsc-users] Interpreting Redistribution SF

2023-01-19 Thread Nicholas Arnold-Medabalimi
Hi Petsc Users

I'm working with a distribution star forest generated by
DMPlexDistribute, and PetscSFBcast and Reduce to move data between the
initial distribution and the distribution generated by DMPlexDistribute.

I'm trying to debug some values that aren't being copied properly and
wanted to verify I understand how a redistribution SF works compared with a
SF that describes overlapped points.

  [0] 0 <- (0,7) point 0 on the distributed plex is point 7 on process
0 on the initial distribution
  [0] 1 <- (0,8) point 1 on the distributed plex is point 8 on process
0 on the initial distribution
  [0] 2 <- (0,9)
  [0] 3 <- (0,10)
  [0] 4 <- (0,11)

  [1] 0 <- (1,0) point 0 on the distributed plex is point 0 on process
1 on the initial distribution
  [1] 1 <- (1,1)
  [1] 2 <- (1,2)
  [1] 3 <- (0,0) point 3 on the distributed plex is point 0 on process
0 on the initial distribution
  [1] 4 <- (0,1)
  [1] 5 <- (0,2)

 My confusion, I think, is: how does the distributionSF tell us which cells
will be leaves on the new distribution?


Sincerely
Nicholas

-- 
Nicholas Arnold-Medabalimi

Ph.D. Candidate
Computational Aeroscience Lab
University of Michigan


Re: [petsc-users] multi GPU partitions have very different memory usage

2023-01-19 Thread Mark Adams
On Wed, Jan 18, 2023 at 6:03 PM Mark Lohry  wrote:

> Thanks Mark, I'll try the kokkos bit. Any other suggestions for minimizing
> memory besides the obvious use less levels?
>
> Unfortunately Jacobi does poorly compared to ILU on these systems.
>
> I'm seeing grid complexity 1.48 and operator complexity 1.75 with
> pc_gamg_square_graph 0, and 1.15/1.25 with it at 1.
>

That looks good. Use 1.


> Additionally the convergence rate is pretty healthy with 5 gmres+asm
> smooths but very bad with 5 Richardson+asm.
>
>
Yea, it needs to be damped and GMRES does that automatically.
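For completeness, roughly the option set being discussed, as a sketch only -- the
kokkos types assume a PETSc build configured with --download-kokkos
--download-kokkos-kernels (with the Kokkos Kernels TPLs turned off, as suggested;
check configure --help for the exact flag):

-ksp_type fgmres -pc_type gamg -pc_gamg_square_graph 1
-mg_levels_ksp_type gmres -mg_levels_pc_type asm
-mat_type aijkokkos -vec_type kokkos
-ksp_view -memory_view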


>
> On Wed, Jan 18, 2023, 4:48 PM Mark Adams  wrote:
>
>> cusparse matrix triple product takes a lot of memory. We usually use
>> Kokkos, configured with TPL turned off.
>>
>> If you have a complex problem different parts of the domain can coarsen
>> at different rates.
>> Jacobi instead of asm will save a fair amount of memory.
>> If you run with -ksp_view you will see operator/matrix complexity from
>> GAMG. These should be < 1.5,
>>
>> Mark
>>
>> On Wed, Jan 18, 2023 at 3:42 PM Mark Lohry  wrote:
>>
>>> With asm I see a range of 8GB-13GB, slightly smaller ratio but that
>>> probably explains it (does this still seem like a lot of memory to you for
>>> the problem size?)
>>>
>>> In general I don't have the same number of blocks per row, so I suppose
>>> it makes sense there's some memory imbalance.
>>>
>>>
>>>
>>> On Wed, Jan 18, 2023 at 3:35 PM Mark Adams  wrote:
>>>
 Can your problem have load imbalance?

 You might try '-pc_type asm' (and/or jacobi) to see your baseline load
 imbalance.
 GAMG can add some load imbalance but start by getting a baseline.

 Mark

 On Wed, Jan 18, 2023 at 2:54 PM Mark Lohry  wrote:

> Q0) does -memory_view trace GPU memory as well, or is there another
> method to query the peak device memory allocation?
>
> Q1) I'm loading a aijcusparse matrix with MatLoad, and running with
> -ksp_type fgmres -pc_type gamg -mg_levels_pc_type asm with mat info
> 27,142,948 rows and cols, bs=4, total nonzeros 759,709,392. Using 8 ranks
> on 8x80GB GPUs, and during the setup phase before crashing with
> CUSPARSE_STATUS_INSUFFICIENT_RESOURCES nvidia-smi shows the below pasted
> content.
>
> GPU memory usage spanning from 36GB-50GB but with one rank at 77GB. Is
> this expected? Do I need to manually repartition this somehow?
>
> Thanks,
> Mark
>
>
>
> +-----------------------------------------------------------------------------+
> | Processes:                                                                  |
> |  GPU   GI   CI        PID   Type   Process name                 GPU Memory  |
> |        ID   ID                                                  Usage       |
> |=============================================================================|
> |    0   N/A  N/A   1630309      C   nvidia-cuda-mps-server            27MiB  |
> |    0   N/A  N/A   1696543      C   ./petsc_solver_test            38407MiB  |
> |    0   N/A  N/A   1696544      C   ./petsc_solver_test              467MiB  |
> |    0   N/A  N/A   1696545      C   ./petsc_solver_test              467MiB  |
> |    0   N/A  N/A   1696546      C   ./petsc_solver_test              467MiB  |
> |    0   N/A  N/A   1696548      C   ./petsc_solver_test              467MiB  |
> |    0   N/A  N/A   1696550      C   ./petsc_solver_test              471MiB  |
> |    0   N/A  N/A   1696551      C   ./petsc_solver_test              467MiB  |
> |    0   N/A  N/A   1696552      C   ./petsc_solver_test              467MiB  |
> |    1   N/A  N/A   1630309      C   nvidia-cuda-mps-server            27MiB  |
> |    1   N/A  N/A   1696544      C   ./petsc_solver_test            35849MiB  |
> |    2   N/A  N/A   1630309      C   nvidia-cuda-mps-server            27MiB  |
> |    2   N/A  N/A   1696545      C   ./petsc_solver_test            36719MiB  |
> |    3   N/A  N/A   1630309      C   nvidia-cuda-mps-server            27MiB  |
> |    3   N/A  N/A   1696546      C   ./petsc_solver_test            37343MiB  |
> |    4   N/A  N/A   1630309      C   nvidia-cuda-mps-server            27MiB  |
> |    4   N/A  N/A   1696548      C   ./petsc_solver_test            36935MiB  |
> |    5   N/A  N/A   1630309      C   nvidia-cuda-mps-server            27MiB  |
> |    5   N/A  N/A   1696550      C   ./petsc_solver_test            49953MiB  |
> |    6   N/A  N/A   1630309      C   nvidia-cuda-mps-server            27MiB  |
> |    6   N/A  N/A   1696551      C   ./petsc_solver_test            47693MiB  |
> |    7   N/A  N/A   1630309      C   nvidia-cuda-mps-server            27MiB  |
> |    7   N/A  N/A   1696552      C   ./petsc_solver_test            77331MiB  |
> +-----------------------------------------------------------------------------+
>
>
>