Re: [petsc-dev] Build error with zlib

2024-01-19 Thread Satish Balay via petsc-dev
On Fri, 19 Jan 2024, Satish Balay via petsc-dev wrote:

> > 
> > Unable to download package ZLIB from:
> > http://ftp.mcs.anl.gov/pub/petsc/externalpackages/zlib-1.2.11.tar.gz
> > 
> 
> Hm - this works directly - but not from python? I'll have to check with our 
> admins.

This now works for me. Can you recheck?

Satish


Re: [petsc-dev] Build error with zlib

2024-01-19 Thread Satish Balay via petsc-dev
On Fri, 19 Jan 2024, Adrian Croucher wrote:

> hi
> 
> I've just started having errors building PETSc on both my Github CI pipeline
> and on another machine, as a result of zlib failing to download. (I'm using
> the download-zlib option.) The error is:
> 
> UNABLE to CONFIGURE with GIVEN OPTIONS    (see configure.log for details):
> Error during download/extract/detection of ZLIB:
> Unable to download package ZLIB from: http://www.zlib.net/zlib-1.2.11.tar.gz
> 
> ...
> 
> Unable to download package ZLIB from:
> http://ftp.mcs.anl.gov/pub/petsc/externalpackages/zlib-1.2.11.tar.gz
> 

Hm - this works directly - but not from python? I'll have to check with our 
admins.

You can try the alternative:

--download-zlib=https://ftp.mcs.anl.gov/pub/petsc/externalpackages/zlib-1.2.11.tar.gz

> ...
> 
> Unable to download package ZLIB from:
> ftp://ftp.mcs.anl.gov/pub/petsc/externalpackages/zlib-1.2.11.tar.gz
> 
> ...
> 
> Unable to download package ZLIB from:
> https://www.mcs.anl.gov/petsc/mirror/externalpackages/zlib-1.2.11.tar.gz
> 
> When I test these URLs, the first one from www.zlib.net does indeed give a
> 404. But the other ones are valid. I assumed that it's trying these URLs in
> sequence until it finds one that works?
> 
> I am using an older version of PETSc (3.15.5) as there is a PETSc bug in more
> recent versions that affects my project and means I can't upgrade until it is
> resolved.

It would be good to get this resolved - so you can upgrade. Is there a report 
on this issue [perhaps at https://gitlab.com/petsc/petsc/-/issues]?

Satish

> 
> Regards, Adrian
> 
> 


Re: [petsc-dev] Slack Workspace

2023-12-05 Thread Satish Balay via petsc-dev
We are using discord:

https://lists.mcs.anl.gov/pipermail/petsc-users/2023-July/049115.html

Satish

On Tue, 5 Dec 2023, Escobedo, Andres via petsc-dev wrote:

> Hello,
> 
> 
> My name is Andy, I am a masters student in the computational fluid dynamics 
> group at the University of British Columbia Okanagan. I was hoping you could 
> please whitelist this email address so that I could join the PETSc slack 
> workspace.
> 
> 
> Regards,
> 
> 
> Andrés Escobedo (44365154)
> 
> Masters of Applied Science Student
> The University of British Columbia | Okanagan Campus | Syilx Okanagan Nation 
> Territory
> 
> aesco...@mail.ubc.ca
> 

Re: [petsc-dev] Petsc compilation issues with Xcode 15 on macOS Sonoma

2023-11-13 Thread Satish Balay via petsc-dev
Sounds like the issue is with using FindPETSc - after petsc is installed [and 
the petsc install went fine?]

I think the current recommendation is to use the pkg-config file interface via 
CMake - instead of FindPETSc.
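
For reference, a minimal CMakeLists.txt sketch using the pkg-config interface 
could look something like this [just a sketch - it assumes PETSc's pkgconfig 
directory, $PETSC_DIR/$PETSC_ARCH/lib/pkgconfig, is on PKG_CONFIG_PATH, and 
'myapp'/'main.c' are placeholders]:

cmake_minimum_required(VERSION 3.6)
project(myapp C)
find_package(PkgConfig REQUIRED)
# creates the imported target PkgConfig::PETSC from PETSc's .pc file
pkg_search_module(PETSC REQUIRED IMPORTED_TARGET PETSc petsc)
add_executable(myapp main.c)
target_link_libraries(myapp PkgConfig::PETSC)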

And the following works for me..

./configure  COPTFLAGS="-O3 -g" CXXOPTFLAGS="-O3 -g" FOPTFLAGS="-O3 -g" \
--download-openmpi --with-64-bit-indices --download-metis --download-parmetis \
--download-suitesparse --download-superlu_dist --download-hypre --download-scalapack --download-mumps \
--with-debugging=0 --with-clean

A couple of notes:

- we recommend not using 'sudo' when building PETSc [build and use petsc from a 
user account; one can also install into a system-wide location from a user 
account]
- do not use --with-cxx-dialect - let configure determine it.
- petsc does not use boost [it's there for trilinos - but that part is broken]

Satish

On Mon, 13 Nov 2023, Barry Smith wrote:

> 
>   Please send configure.log (best to petsc-ma...@mcs.anl.gov).
> 
> 
> > On Nov 13, 2023, at 11:41 AM, Abhinav Singh  
> > wrote:
> > 
> > Dear Petsc devs,
> > 
> > I have been having issues lately when compiling petsc on MacOS with various 
> > libraries. This is probably due to the updated linker in MacOS and 
> > unfortunately the newer operating systems do not allow using older 
> > toolchains.
> > 
> > on arm64, only 3.20 and up can be configured
> > 
> > 
> > My configure command usually looks like this:
> > 'sudo ./configure  COPTFLAGS=-O3 -g CXXOPTFLAGS=-O3 -g FOPTFLAGS=-O3 -g 
> > --with-cxx-dialect=C++11 --with-mpi-dir=/opt/openfpm/dep_clang/MPI 
> > --with-64-bit-indices  --with-parmetis-dir=/opt/openfpm/dep_clang/PARMETIS 
> > --with-metis-dir=/opt/openfpm/dep_clang/METIS --with-boost=yes 
> > --with-boost-dir=/opt/openfpm/dep_clang/BOOST 
> > --with-suitesparse-dir=/opt/openfpm/dep_clang/SUITESPARSE 
> > --download-superlu_dist --download-hypre 
> > --prefix=/opt/openfpm/dep_clang/PETSC --download-scalapack --download-mumps 
> > --with-debugging=0 --with-clean
> > '
> > 
> > There are two main issues:
> > 
> > 1) Duplicate 'LC_Paths' when compiling code with apple clang and gfortran. 
> > If I enable download_scalapack, the cmake findPetsc fails with the 
> > following error on both x86 and arm64:
> > """
> >  kind: "try_run-v1"
> > backtrace:
> >   - 
> > "/opt/homebrew/Cellar/cmake/3.27.7/share/cmake/Modules/Internal/CheckSourceRuns.cmake:93
> >  (try_run)"
> >   - 
> > "/opt/homebrew/Cellar/cmake/3.27.7/share/cmake/Modules/CheckCSourceRuns.cmake:52
> >  (cmake_check_source_runs)"
> >   - "cmake_modules/FindPackageMultipass.cmake:97 (check_c_source_runs)"
> >   - "cmake_modules/FindPETSc.cmake:284 (multipass_source_runs)"
> >   - "cmake_modules/FindPETSc.cmake:318 (petsc_test_runs)"
> >   - "CMakeLists.txt:69 (find_package)"
> > checks:
> >   - "Performing Test MULTIPASS_TEST_4_petsc_works_all"
> > directories:
> >   source: 
> > "/Users/absingh/openfpm_pdata/build/CMakeFiles/CMakeScratch/TryCompile-LJcmJB"
> >   binary: 
> > "/Users/absingh/openfpm_pdata/build/CMakeFiles/CMakeScratch/TryCompile-LJcmJB"
> > cmakeVariables:
> >   CMAKE_C_FLAGS: ""
> >   CMAKE_EXE_LINKER_FLAGS: ""
> >   CMAKE_MODULE_PATH: "/Users/absingh/openfpm_pdata/cmake_modules/"
> >   CMAKE_OSX_ARCHITECTURES: ""
> >   CMAKE_OSX_DEPLOYMENT_TARGET: ""
> >   CMAKE_OSX_SYSROOT: 
> > "/Library/Developer/CommandLineTools/SDKs/MacOSX14.0.sdk"
> > buildResult:
> >   variable: "MULTIPASS_TEST_4_petsc_works_all_COMPILED"
> >   cached: true
> >   stdout: |
> > Change Dir: 
> > '/Users/absingh/openfpm_pdata/build/CMakeFiles/CMakeScratch/TryCompile-LJcmJB'
> > 
> > Run Build Command(s): /opt/homebrew/Cellar/cmake/3.27.7/bin/cmake 
> > -E env VERBOSE=1 /usr/bin/make -f Makefile cmTC_e0165/fast
> > /Library/Developer/CommandLineTools/usr/bin/make  -f 
> > CMakeFiles/cmTC_e0165.dir/build.make CMakeFiles/cmTC_e0165.dir/build
> > Building C object CMakeFiles/cmTC_e0165.dir/src.c.o
> > /Library/Developer/CommandLineTools/usr/bin/cc 
> > -DMULTIPASS_TEST_4_petsc_works_all -I/opt/openfpm/dep_clang/PETSC/include 
> > -I/opt/openfpm/dep_clang/SUITESPARSE/include 
> > -I/opt/openfpm/dep_clang/PARMETIS/include 
> > -I/opt/openfpm/dep_clang/METIS/include 
> > -I/opt/openfpm/dep_clang/BOOST/include -I/opt/openfpm/dep_clang/MPI/include 
> > -arch arm64 -isysroot 
> > /Library/Developer/CommandLineTools/SDKs/MacOSX14.0.sdk -MD -MT 
> > CMakeFiles/cmTC_e0165.dir/src.c.o -MF CMakeFiles/cmTC_e0165.dir/src.c.o.d 
> > -o CMakeFiles/cmTC_e0165.dir/src.c.o -c 
> > /Users/absingh/openfpm_pdata/build/CMakeFiles/CMakeScratch/TryCompile-LJcmJB/src.c
> > Linking C executable cmTC_e0165
> > /opt/homebrew/Cellar/cmake/3.27.7/bin/cmake -E cmake_link_script 
> > CMakeFiles/cmTC_e0165.dir/link.txt --verbose=1
> > /Library/Developer/CommandLineTools/usr/bin/cc  -arch arm64 
> > -isysroot 

Re: [petsc-dev] Request to get added to Slack

2023-10-10 Thread Satish Balay via petsc-dev
Johann,

We are migrating to discord from slack:

https://lists.mcs.anl.gov/pipermail/petsc-users/2023-July/049115.html

[I guess docs need updating]

Satish

On Tue, 10 Oct 2023, Johann Rudi wrote:

> Hello,
> 
> I would like to join the Petsc Slack space, and on petsc.org the
> instructions say to email to this mailing list. (Apologies for spamming.)
> 
> best,
> Johann Rudi
> 



[petsc-dev] petsc (3.20) release plan for Sep/2023

2023-09-01 Thread Satish Balay via petsc-dev
With our current 6-month release cycle, it's again time for another PETSc 
release.

For this release [3.20], let's work with the following dates:

- feature freeze: Sep 26 say 5PM EST
- release: Sep 28 say 5PM EST

"v3.20-release" milestone can be used with all MRs that are targeted for a 
merge before this release.

Thanks,
Satish



Re: [petsc-dev] PETSc optimization

2023-08-08 Thread Satish Balay via petsc-dev
Well, in the spack world the idea is to get flags from spack and use them in the 
petsc build. It's possible that there are issues in this implementation.

Something spack does is internally add flags (to the compiler, via its compiler 
wrapper) that petsc configure doesn't see.


I see 'spack install cflags=-O3' is working - but not spack install cflags='-O3 
-g' [something to debug]

[there is also the alternative of adding flags to compiler spec file]
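
For example, a compilers.yaml entry with per-compiler flags could look roughly 
like this [a sketch only - the compiler spec and paths are placeholders for 
whatever 'spack compiler find' detected on your system]:

compilers:
- compiler:
    spec: gcc@12.2.0
    operating_system: fedora38
    modules: []
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags:
      cflags: -O3 -g
      cxxflags: -O3 -g
      fflags: -O3 -g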

The other issue is mapping cflags from spack to CFLAGS/COPTFLAGS - in some 
cases the petsc build requires some default CFLAGS to work - and overriding them 
from spack might cause issues. [again, this part is not properly handled]..

Satish

On Tue, 8 Aug 2023, Liu Wei   AWE via petsc-dev wrote:

> Hi all
> 
> I am currently building a large software stack using Spack with PETSc 3.19 as 
> part of the dependency library.
> 
> Spotted the following message during the build process
> 
> Using default optimization C flags "-g -O". You might consider manually 
> setting optimal optimization flags for your system with
> COPTFLAGS="optimization flags" see config/examples/arch-*-opt.py for examples
> 
> Previously when we install PETSc manually, optimisation flags are enforced 
> via configure script e.g.
> COPTFLAGS="-g -O3 -march=native"
> CXXOPTFLAGS="-g -O3 -march=native"
> ...
> (or "-g -O3 -xhost" for intel compiler)
> 
> Whilst spack spec syntax allows compiler parameters via cflags/cxxflags/fflags
> https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/petsc/package.py
> the requirement is not obvious, especially in a long build process.
> 
> The question is should optimisation be amended at spack recipe stage or 
> should PETSc configure enforce some compiler optimisation (if 
> -with-debugging=0 is set) ?
> 
> Regards
> 
> Dr Wei Liu
> High Performance Computing
> To chat with me on Teams click 
> here
> T: +44 01189 856209
> M: wei@awe.co.uk
> AWE Aldermaston,
> Reading, Berkshire, RG7 4PR
> 
> The information in this email and in any attachment(s) is commercial in 
> confidence. If you are not the named addressee(s) or if you receive this 
> email in error then any distribution, copying or use of this communication or 
> the information in it is strictly prohibited. Please notify us immediately by 
> email at admin.internet(at)awe.co.uk, and then delete this message from your 
> computer. While attachments are virus checked, AWE plc does not accept any 
> liability in respect of any virus which is not detected. AWE Plc Registered 
> in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 
> 4PR
> 



Re: [petsc-dev] To join the Slack workspace

2023-07-10 Thread Satish Balay via petsc-dev


Check:

https://lists.mcs.anl.gov/pipermail/petsc-users/2023-July/049015.html

Satish


On Mon, 10 Jul 2023, Singh, Abhishek Kumar wrote:

> I am working as a Postdoc at Max Planck Institute for Plasma Physics. I have 
> started using PETSc as a tool for solving system of equations. I would like 
> to explore more about PETSc. I would like to join the slack workspace specially 
> dedicated to PETSc. Looking forward to hearing from you.
> 
> 
> Best regards,
> 
> Abhishek Kumar Singh
> 



Re: [petsc-dev] petsc4py doc problem

2023-06-27 Thread Satish Balay via petsc-dev

https://gitlab.com/petsc/petsc/-/merge_requests/6578/commits

commit 0898713fbecf5e265dbd1d072d2bffc3dcf92948
Author: Stefano Zampini 
Date:   Sat Jun 17 20:40:28 2023 +0200

petsc4py docs: enforce 79 characters lines
<<<

so perhaps:

diff --git a/src/binding/petsc4py/src/petsc4py/PETSc/Mat.pyx b/src/binding/petsc4py/src/petsc4py/PETSc/Mat.pyx
index 067d98e3b0b..c04f501c73b 100644
--- a/src/binding/petsc4py/src/petsc4py/PETSc/Mat.pyx
+++ b/src/binding/petsc4py/src/petsc4py/PETSc/Mat.pyx
@@ -4745,7 +4745,8 @@ cdef class Mat(Object):
 U : Mat
 The first dense rectangular matrix.
 c : Vec
-The sequential vector containing the diagonal of ``C``, or NULL for all ones.
+The sequential vector containing the diagonal of ``C``,
+or NULL for all ones.
 V : Mat
 The second dense rectangular matrix, or NULL for a copy of ``U``.
 
Satish


On Tue, 27 Jun 2023, Matthew Knepley wrote:

> I do not understand those docs at all:
> 
> Using PETSC inventory from
> file:///scratch/svcpetsc/glci-builds-stage2/VW-hbPim/0/petsc/petsc/public/html/objects.inv
> 
> Warning, treated as error:
> Line 9 for Mat.setLRCMats(self, A: Mat, U: Mat, c: Vec | None = None, V:
> Mat | None = None) too long.
> 
>   https://gitlab.com/petsc/petsc/-/jobs/4554950030
> 
> Can someone fix this? https://gitlab.com/petsc/petsc/-/merge_requests/6640
> 
>   Thanks,
> 
>  Matt
> 
> 



Re: [petsc-dev] building sphinx doc only

2023-06-15 Thread Satish Balay via petsc-dev
On Thu, 15 Jun 2023, Blaise Bourdin wrote:

> Hi,
> 
> I am trying to figure out why the doc page for PetscOptionsHeadBegin 
> https://petsc.org/release/manualpages/Sys/PetscOptionsHeadBegin/ is broken.

>>
#else
  /*MC  

   
PetscOptionsBegin - Begins a set of queries on the options database that 
are related and should be   
 ..
..
 
M*/
  #define PetscOptionsBegin(comm, prefix, mess, sec) \
<<

I suspect this formatting (extra spaces) is triggering the broken docs...

Satish


Re: [petsc-dev] building sphinx doc only

2023-06-15 Thread Satish Balay via petsc-dev
On Thu, 15 Jun 2023, Jacob Faibussowitsch wrote:

> > I am trying to figure out why the doc page for PetscOptionsHeadBegin 
> > https://petsc.org/release/manualpages/Sys/PetscOptionsHeadBegin/ is broken.
> 
> It's missing a Synopsis: section. See PetscOptionsEnd docstring.
> 
> > I do 
> > 
> > cd $PETSC_DIR/doc
> > make sphinxhtml
> > 
> > which takes ages and seems to also build the old style html documentation. 
> > Is this right?
> 
> I think you can just do 
> 
> $ cd ${PETSC_DIR}
> $ make docs
> 
> This has the added benefit of doing it all in a venv so you don’t pollute 
> your regular python install.


>
docs:
cd doc; ${OMAKE_SELF} sphinxhtml
<

i.e. same as the above...

Satish



> 
> Best regards,
> 
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> 
> > On Jun 15, 2023, at 11:24, Blaise Bourdin  wrote:
> > 
> > Hi,
> > 
> > I am trying to figure out why the doc page for PetscOptionsHeadBegin 
> > https://petsc.org/release/manualpages/Sys/PetscOptionsHeadBegin/ is broken.
> > Following the instructions at 
> > https://petsc.org/release/developers/documentation/#developing-petsc-documentation
> >  
> > I do 
> > 
> > cd $PETSC_DIR/doc
> > make sphinxhtml
> > 
> > which takes ages and seems to also build the old style html documentation. 
> > Is this right?
> > 
> > Regards,
> > Blaise
> > 
> > — 
> > Canada Research Chair in Mathematical and Computational Aspects of Solid 
> > Mechanics (Tier 1)
> > Professor, Department of Mathematics & Statistics
> > Hamilton Hall room 409A, McMaster University
> > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
> > https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> > 
> 


Re: [petsc-dev] building sphinx doc only

2023-06-15 Thread Satish Balay via petsc-dev
On Thu, 15 Jun 2023, Blaise Bourdin wrote:

> Hi,
> 
> I am trying to figure out why the doc page for PetscOptionsHeadBegin 
> https://petsc.org/release/manualpages/Sys/PetscOptionsHeadBegin/ is broken.
> Following the instructions at 
> https://petsc.org/release/developers/documentation/#developing-petsc-documentation
>  
> I do 
> 
> cd $PETSC_DIR/doc
> make sphinxhtml
> 
> which takes ages and seems to also build the old style html documentation. Is 
> this right?

The doc build is a complex multi-step process - and takes about 30+ min in CI

https://gitlab.com/petsc/petsc/-/jobs/4475363877

Satish

> 
> Regards,
> Blaise
> 
> — 
> Canada Research Chair in Mathematical and Computational Aspects of Solid 
> Mechanics (Tier 1)
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> 
> 


Re: [petsc-dev] GCov CI problem

2023-06-14 Thread Satish Balay via petsc-dev
not ok vec_pf_impls_string_tests-ex1_1 # Error code: 65
#   [0]PETSC ERROR: - Error Message 
--
#   [0]PETSC ERROR: Unable to open file
#   [0]PETSC ERROR: Unable to open dynamic library:
# /tmp/svcpetsc/libpetscdlib.so

Which part of the code is creating and loading a library from this location?


src/vec/pf/impls/string/cstring.c:  PetscCall(PetscSNPrintf(task, 
PETSC_STATIC_ARRAY_LENGTH(task), "cd %s ; if [ ! -d ${USERNAME} ]; then mkdir 
${USERNAME}; fi ; cd ${USERNAME} ; rm -f makefile petscdlib.* ; cp -f 
${PETSC_DIR}/src/vec/pf/impls/string/makefile ./makefile ; ${PETSC_MAKE} NIN=%" 
PetscInt_FMT " NOUT=%" PetscInt_FMT " -f makefile libpetscdlib 
STRINGFUNCTION=\"%s\"  %s ;  sync\n", tmp, pf->dimin, pf->dimout, string, 
keeptmpfiles ? "; rm -f makefile petscdlib.c" : ""));


Should a library be doing such a thing - in a common location? [however useful 
such a feature is :(]

Satish

On Wed, 14 Jun 2023, Matthew Knepley wrote:

> https://gitlab.com/petsc/petsc/-/jobs/4474550390
> 
> It looks like some gcov stuff is failing in the CI, unrelated to this MR.
> 
>   Thanks,
> 
>  Matt
> 
> 



Re: [petsc-dev] So CFLAGS no longer works!!!! Major crisis

2023-04-26 Thread Satish Balay via petsc-dev
Not sure how we can add a deprecation message here - so adding this note to the 
'changes' doc

https://gitlab.com/petsc/petsc/-/merge_requests/6382

Satish

On Wed, 26 Apr 2023, Satish Balay via petsc-dev wrote:

> On Wed, 26 Apr 2023, Barry Smith wrote:
> 
> > 
> > 
> >   Urg, so user makefiles that worked for 25+ years suddenly don't work and 
> > that is ok? No deprecation message as Jed would have liked?
> 
> I think I raised this issue when 'CFLAGS = ' stuff was removed from all 
> makefiles.
> 
> You can view this change as a necessary fix for the above cleanup change..
> 
> I don't know if there is gnumake syntax where the reset in 
> PETSC_ARCH/lib/petsc/conf/petscvariables can selectively reset only an env variable 
> - but not a prior make variable
> 
> Right now the fix is to move the 'CFLAGS' line after the 'include' line
> 
> Satish
> 
> > 
> >   So it is from 
> > # Avoid picking CFLAGS etc from env - but support 'make CFLAGS=-Werror' 
> > etc..
> > self.addMakeMacro('CFLAGS','')
> > self.addMakeMacro('CPPFLAGS','')
> > self.addMakeMacro('CXXFLAGS','')
> > self.addMakeMacro('CXXPPFLAGS','')
> > self.addMakeMacro('FFLAGS','')
> > self.addMakeMacro('FPPFLAGS','')
> > self.addMakeMacro('CUDAFLAGS','')
> > self.addMakeMacro('CUDAPPFLAGS','')
> > self.addMakeMacro('HIPFLAGS','')
> > self.addMakeMacro('HIPPPFLAGS','')
> > self.addMakeMacro('SYCLFLAGS','')
> > self.addMakeMacro('SYCLPPFLAGS','')
> > self.addMakeMacro('LDFLAGS','')
> > 
> > What was "from env" supposed to mean? You mean environmental variables? 
> > 
> > Is there some other way of not automatically using the environmental 
> > variables that doesn't break 25 years of user makefiles? Since these things 
> > all require GNUmake, is there some GNUmake-ish way to handle this without 
> > breaking current makefiles?
> > 
> > 
> > 
> > 
> > > On Apr 26, 2023, at 5:34 PM, Satish Balay  wrote:
> > > 
> > > Well we wanted to always have  CFLAGS initialized by configure [to ignore 
> > > stuff from env].
> > > 
> > > So now - if we are setting in makefile - it has to be set after this 
> > > default is set - i.e after the line:
> > > 
> > > include ${PETSC_DIR}/lib/petsc/conf/variables
> > > 
> > > Or do:
> > > 
> > > make CFLAGS=garbage ex1
> > > 
> > > There might be a different bug lurking here..
> > > 
> > > -PETSC_CCOMPILE_SINGLE   = ${CC} -o $*.o -c ${CC_FLAGS} ${FLAGS} 
> > > ${CPPFLAGS}
> > > +PETSC_CCOMPILE_SINGLE   = ${CC} -o $*.o -c ${CC_FLAGS} ${CFLAGS} 
> > > ${CPPFLAGS}
> > > 
> > > Satish
> > > 
> > > On Wed, 26 Apr 2023, Barry Smith wrote:
> > > 
> > >> 
> > >> $ make ex1
> > >> mpicc -Wl,-bind_at_load -Wl,-multiply_defined,suppress 
> > >> -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs 
> > >> -Wl,-search_paths_first -Wl,-no_compact_unwind  -Wall -Wwrite-strings 
> > >> -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow 
> > >> -fvisibility=hidden -g3 -O0  -I/Users/barrysmith/Src/petsc/include 
> > >> -I/Users/barrysmith/Src/petsc/arch-release/include -I/opt/X11/include
> > >>   ex1.c  -Wl,-rpath,/Users/barrysmith/Src/petsc/arch-release/lib 
> > >> -L/Users/barrysmith/Src/petsc/arch-release/lib -Wl,-rpath,/opt/X11/lib 
> > >> -L/opt/X11/lib 
> > >> -Wl,-rpath,/Users/barrysmith/soft/mpich-clang-gfortran-opt/lib 
> > >> -L/Users/barrysmith/soft/mpich-clang-gfortran-opt/lib 
> > >> -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc/aarch64-apple-darwin22/12
> > >>  
> > >> -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc/aarch64-apple-darwin22/12
> > >>  -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc 
> > >> -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc 
> > >> -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current -lpetsc -llapack -lblas -lX11 -lmpifort -lmpi -lpmpi -lgfortran -lemutls_w -lquadmath -lstdc++ -lquadmath -o ex1
> > >> ~/Src/petsc/src/snes/tutorials (release *=) arch-release
> > >> $ more makefile
> > >> -include ../../../petscdir.mk
> > >> 
> > >> MANSEC   = SNES
> > >> EXAMPLESMATLAB   = ex5m.m ex29view.m
> > >> DIRS = ex10d network
> > >> CLEANFILES   = ex5f90t
> > >> CFLAGS = garbage
> > >> 
> > >> 
> > >> The new stuff in variables PETSC_COMPILE_SINGLE= ${PCC} -o $*.o -c 
> > >> ${PCC_FLAGS} ${${CLANGUAGE}FLAGS} ${CCPPFLAGS}  with the recursive use 
> > >> of $ doesn't work? This is on my Mac but Get also has the problem on 
> > >> Polaris
> > 
> 



Re: [petsc-dev] So CFLAGS no longer works!!!! Major crisis

2023-04-26 Thread Satish Balay via petsc-dev
On Wed, 26 Apr 2023, Barry Smith wrote:

> 
> 
>   Urg, so user makefiles that worked for 25+ years suddenly don't work and 
> that is ok? No deprecation message as Jed would have liked?

I think I raised this issue when 'CFLAGS = ' stuff was removed from all 
makefiles.

You can view this change as a necessary fix for the above cleanup change..

I don't know if there is gnumake syntax where the reset in 
PETSC_ARCH/lib/petsc/conf/petscvariables can selectively reset only an env variable - 
but not a prior make variable

Right now the fix is to move the 'CFLAGS' line after the 'include' line
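
i.e. a minimal user makefile sketch with the new ordering [the -O3 value is just 
an example]:

include ${PETSC_DIR}/lib/petsc/conf/variables
include ${PETSC_DIR}/lib/petsc/conf/rules

# CFLAGS is set after the includes - so it overrides the empty default
# that configure now writes into petscvariables
CFLAGS = -O3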

Satish

> 
>   So it is from 
> # Avoid picking CFLAGS etc from env - but support 'make CFLAGS=-Werror' 
> etc..
> self.addMakeMacro('CFLAGS','')
> self.addMakeMacro('CPPFLAGS','')
> self.addMakeMacro('CXXFLAGS','')
> self.addMakeMacro('CXXPPFLAGS','')
> self.addMakeMacro('FFLAGS','')
> self.addMakeMacro('FPPFLAGS','')
> self.addMakeMacro('CUDAFLAGS','')
> self.addMakeMacro('CUDAPPFLAGS','')
> self.addMakeMacro('HIPFLAGS','')
> self.addMakeMacro('HIPPPFLAGS','')
> self.addMakeMacro('SYCLFLAGS','')
> self.addMakeMacro('SYCLPPFLAGS','')
> self.addMakeMacro('LDFLAGS','')
> 
> What was "from env" supposed to mean? You mean environmental variables? 
> 
> Is there some other way of not automatically using the environmental 
> variables that doesn't break 25 years of user makefiles? Since these things 
> all require GNUmake, is there some GNUmake-ish way to handle this without 
> breaking current makefiles?
> 
> 
> 
> 
> > On Apr 26, 2023, at 5:34 PM, Satish Balay  wrote:
> > 
> > Well we wanted to always have  CFLAGS initialized by configure [to ignore 
> > stuff from env].
> > 
> > So now - if we are setting in makefile - it has to be set after this 
> > default is set - i.e after the line:
> > 
> > include ${PETSC_DIR}/lib/petsc/conf/variables
> > 
> > Or do:
> > 
> > make CFLAGS=garbage ex1
> > 
> > There might be a different bug lurking here..
> > 
> > -PETSC_CCOMPILE_SINGLE   = ${CC} -o $*.o -c ${CC_FLAGS} ${FLAGS} ${CPPFLAGS}
> > +PETSC_CCOMPILE_SINGLE   = ${CC} -o $*.o -c ${CC_FLAGS} ${CFLAGS} 
> > ${CPPFLAGS}
> > 
> > Satish
> > 
> > On Wed, 26 Apr 2023, Barry Smith wrote:
> > 
> >> 
> >> $ make ex1
> >> mpicc -Wl,-bind_at_load -Wl,-multiply_defined,suppress 
> >> -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs 
> >> -Wl,-search_paths_first -Wl,-no_compact_unwind  -Wall -Wwrite-strings 
> >> -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow 
> >> -fvisibility=hidden -g3 -O0  -I/Users/barrysmith/Src/petsc/include 
> >> -I/Users/barrysmith/Src/petsc/arch-release/include -I/opt/X11/include  
> >> ex1.c  -Wl,-rpath,/Users/barrysmith/Src/petsc/arch-release/lib 
> >> -L/Users/barrysmith/Src/petsc/arch-release/lib -Wl,-rpath,/opt/X11/lib 
> >> -L/opt/X11/lib 
> >> -Wl,-rpath,/Users/barrysmith/soft/mpich-clang-gfortran-opt/lib 
> >> -L/Users/barrysmith/soft/mpich-clang-gfortran-opt/lib 
> >> -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc/aarch64-apple-darwin22/12
> >>  
> >> -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc/aarch64-apple-darwin22/12
> >>  -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc 
> >> -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc 
> >> -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current -lpetsc -llapack -lblas -lX11 -lmpifort -lmpi -lpmpi -lgfortran -lemutls_w -lquadmath -lstdc++ -lquadmath -o ex1
> >> ~/Src/petsc/src/snes/tutorials (release *=) arch-release
> >> $ more makefile
> >> -include ../../../petscdir.mk
> >> 
> >> MANSEC   = SNES
> >> EXAMPLESMATLAB   = ex5m.m ex29view.m
> >> DIRS = ex10d network
> >> CLEANFILES   = ex5f90t
> >> CFLAGS = garbage
> >> 
> >> 
> >> The new stuff in variables PETSC_COMPILE_SINGLE= ${PCC} -o $*.o -c 
> >> ${PCC_FLAGS} ${${CLANGUAGE}FLAGS} ${CCPPFLAGS}  with the recursive use of 
> >> $ doesn't work? This is on my Mac but Get also has the problem on Polaris
> 


Re: [petsc-dev] So CFLAGS no longer works!!!! Major crisis

2023-04-26 Thread Satish Balay via petsc-dev
Also note:

I think we previously handled this by always having this in each makefile 
[without a configure default]

CFLAGS = 

But that format was removed..

Satish

On Wed, 26 Apr 2023, Satish Balay via petsc-dev wrote:

> Well we wanted to always have  CFLAGS initialized by configure [to ignore 
> stuff from env].
> 
> So now - if we are setting in makefile - it has to be set after this default 
> is set - i.e after the line:
> 
> include ${PETSC_DIR}/lib/petsc/conf/variables
> 
> Or do:
> 
> make CFLAGS=garbage ex1
> 
> There might be a different bug lurking here..
> 
> -PETSC_CCOMPILE_SINGLE   = ${CC} -o $*.o -c ${CC_FLAGS} ${FLAGS} ${CPPFLAGS}
> +PETSC_CCOMPILE_SINGLE   = ${CC} -o $*.o -c ${CC_FLAGS} ${CFLAGS} ${CPPFLAGS}
> 
> Satish
> 
> On Wed, 26 Apr 2023, Barry Smith wrote:
> 
> > 
> > $ make ex1
> > mpicc -Wl,-bind_at_load -Wl,-multiply_defined,suppress 
> > -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs 
> > -Wl,-search_paths_first -Wl,-no_compact_unwind  -Wall -Wwrite-strings 
> > -Wno-unknown-pragmas -Wno-lto-type-mismatch -Wno-stringop-overflow 
> > -fvisibility=hidden -g3 -O0  -I/Users/barrysmith/Src/petsc/include 
> > -I/Users/barrysmith/Src/petsc/arch-release/include -I/opt/X11/include  
> > ex1.c  -Wl,-rpath,/Users/barrysmith/Src/petsc/arch-release/lib 
> > -L/Users/barrysmith/Src/petsc/arch-release/lib -Wl,-rpath,/opt/X11/lib 
> > -L/opt/X11/lib 
> > -Wl,-rpath,/Users/barrysmith/soft/mpich-clang-gfortran-opt/lib 
> > -L/Users/barrysmith/soft/mpich-clang-gfortran-opt/lib 
> > -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc/aarch64-apple-darwin22/12
> >  
> > -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc/aarch64-apple-darwin22/12
> >  -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc 
> > -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc 
> > -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current -lpetsc -llapack -lblas -lX11 -lmpifort -lmpi -lpmpi -lgfortran -lemutls_w -lquadmath -lstdc++ -lquadmath -o ex1
> > ~/Src/petsc/src/snes/tutorials (release *=) arch-release
> > $ more makefile
> > -include ../../../petscdir.mk
> > 
> > MANSEC   = SNES
> > EXAMPLESMATLAB   = ex5m.m ex29view.m
> > DIRS = ex10d network
> > CLEANFILES   = ex5f90t
> > CFLAGS = garbage
> > 
> > 
> > The new stuff in variables PETSC_COMPILE_SINGLE= ${PCC} -o $*.o -c 
> > ${PCC_FLAGS} ${${CLANGUAGE}FLAGS} ${CCPPFLAGS}  with the recursive use of $ 
> > doesn't work? This is on my Mac but Get also has the problem on Polaris
> 



Re: [petsc-dev] So CFLAGS no longer works!!!! Major crisis

2023-04-26 Thread Satish Balay via petsc-dev
Well, we wanted to always have CFLAGS initialized by configure [to ignore stuff 
from env].

So now - if we are setting it in a makefile - it has to be set after this 
default is set - i.e. after the line:

include ${PETSC_DIR}/lib/petsc/conf/variables

Or do:

make CFLAGS=garbage ex1

There might be a different bug lurking here..

-PETSC_CCOMPILE_SINGLE   = ${CC} -o $*.o -c ${CC_FLAGS} ${FLAGS} ${CPPFLAGS}
+PETSC_CCOMPILE_SINGLE   = ${CC} -o $*.o -c ${CC_FLAGS} ${CFLAGS} ${CPPFLAGS}

Satish

On Wed, 26 Apr 2023, Barry Smith wrote:

> 
> $ make ex1
> mpicc -Wl,-bind_at_load -Wl,-multiply_defined,suppress -Wl,-multiply_defined 
> -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first 
> -Wl,-no_compact_unwind  -Wall -Wwrite-strings -Wno-unknown-pragmas 
> -Wno-lto-type-mismatch -Wno-stringop-overflow -fvisibility=hidden -g3 -O0  
> -I/Users/barrysmith/Src/petsc/include 
> -I/Users/barrysmith/Src/petsc/arch-release/include -I/opt/X11/include  
> ex1.c  -Wl,-rpath,/Users/barrysmith/Src/petsc/arch-release/lib 
> -L/Users/barrysmith/Src/petsc/arch-release/lib -Wl,-rpath,/opt/X11/lib 
> -L/opt/X11/lib -Wl,-rpath,/Users/barrysmith/soft/mpich-clang-gfortran-opt/lib 
> -L/Users/barrysmith/soft/mpich-clang-gfortran-opt/lib 
> -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc/aarch64-apple-darwin22/12
>  
> -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc/aarch64-apple-darwin22/12
>  -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc 
> -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current/gcc 
> -Wl,-rpath,/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current -L/opt/homebrew/Cellar/gcc/12.2.0/lib/gcc/current -lpetsc -llapack -lblas -lX11 -lmpifort -lmpi -lpmpi -lgfortran -lemutls_w -lquadmath -lstdc++ -lquadmath -o ex1
> ~/Src/petsc/src/snes/tutorials (release *=) arch-release
> $ more makefile
> -include ../../../petscdir.mk
> 
> MANSEC   = SNES
> EXAMPLESMATLAB   = ex5m.m ex29view.m
> DIRS = ex10d network
> CLEANFILES   = ex5f90t
> CFLAGS = garbage
> 
> 
> The new stuff in variables PETSC_COMPILE_SINGLE= ${PCC} -o $*.o -c 
> ${PCC_FLAGS} ${${CLANGUAGE}FLAGS} ${CCPPFLAGS}  with the recursive use of $ 
> doesn't work? This is on my Mac but Get also has the problem on Polaris


Re: [petsc-dev] Is the petsc4py build broken?

2023-04-16 Thread Satish Balay via petsc-dev
On Sun, 16 Apr 2023, Matthew Knepley wrote:

> On Sun, Apr 16, 2023 at 12:34 AM Pierre Jolivet 
> wrote:
> 
> > petsc4py build is not broken, but maybe it is not future-proof and can’t
> > handle Cython 3.0.0b2 (or there is a regression in Cython).
> > Could you downgrade your Cython (to an actual release and not a beta) and
> > see if the error persists?
> >
> 
> You are correct. However, I think this is a bug in the petsc4py install
> now. I had deleted my Cython, and it was fetched automatically.

Strange. I'm getting:

[balay@pj01 petsc]$ python3 -m pip install --user cython
Collecting cython
  Using cached 
Cython-0.29.34-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl
 (1.9 MB)
Installing collected packages: cython
Successfully installed cython-0.29.34

> I think we need to change
> 
>   src/binding/petsc4py/conf/confpetsc.py:133
> 
> I just changed '>=' to '==', but maybe we just put a '< 3' since it is
> incompatible with the 3.0.0.b2 which setuptools
> is automatically bringing down.

Can you create an MR [to release]? I think Lisandro would have to check on this 
[I guess support both CYTHON_MIN and CYTHON_MAX]
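
Something along these lines, perhaps [purely illustrative - the actual variable 
names and version bounds used in confpetsc.py may differ]:

CYTHON_MIN = '0.29.32'
CYTHON_MAX = '3.0'
CYTHON_REQ = 'Cython >= %s, < %s' % (CYTHON_MIN, CYTHON_MAX)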

Satish

> 
>   Thanks,
> 
>     Matt
> 
> 
> > Thanks,
> > Pierre
> >
> > > On 16 Apr 2023, at 12:25 AM, Satish Balay via petsc-dev <
> > petsc-dev@mcs.anl.gov> wrote:
> > >
> > > Works for me with latest main - so I'm not sure whats going on here..
> > >
> > > Is this reproducible in a clean clone? Also - are you using mpi4py in
> > this build?
> > > https://gitlab.com/petsc/petsc/-/issues/1359
> > > [this issue looks different though..]
> > >
> > > Satish
> > >
> > > --
> > >
> > > *** Building petsc4py ***
> > > running build
> > > running build_src
> > > using Cython 0.29.33
> > > cythonizing 'petsc4py/PETSc.pyx' -> 'petsc4py/PETSc.c'
> > > running build_py
> > > creating build
> > > creating build/lib.linux-x86_64-cpython-311
> > > creating build/lib.linux-x86_64-cpython-311/petsc4py
> > > copying src/petsc4py/PETSc.py ->
> > build/lib.linux-x86_64-cpython-311/petsc4py
> > > copying src/petsc4py/__main__.py ->
> > build/lib.linux-x86_64-cpython-311/petsc4py
> > > copying src/petsc4py/__init__.py ->
> > build/lib.linux-x86_64-cpython-311/petsc4py
> > > creating build/lib.linux-x86_64-cpython-311/petsc4py/lib
> > > 
> > >
> > >
> > > On Sat, 15 Apr 2023, Matthew Knepley wrote:
> > >
> > >> I get
> > >>
> > >> *** Building petsc4py ***
> > >> running build
> > >> running build_src
> > >> removing Cython 0.29.30 from sys.modules
> > >> fetching build requirement 'Cython >= 0.29.32'
> > >> Searching for Cython>=0.29.32
> > >> Best match: Cython 3.0.0b2
> > >> Processing Cython-3.0.0b2-py3.8-macosx-10.9-x86_64.egg
> > >>
> > >> Using
> > >>
> > /PETSc3/petsc/petsc-dev/src/binding/petsc4py/.eggs/Cython-3.0.0b2-py3.8-macosx-10.9-x86_64.egg
> > >>
> > >> using Cython 3.0.0b2
> > >> cythonizing 'petsc4py/PETSc.pyx' -> 'petsc4py/PETSc.c'
> > >> /PETSc3/petsc/petsc-dev/src/binding/petsc4py/conf/cythonize.py: No such
> > >> file or directory: 'petsc4py/PETSc.pyx'
> > >> error: Cython failure: 'petsc4py/PETSc.pyx' -> 'petsc4py/PETSc.c'
> > >> **ERROR*
> > >> Error building petsc4py.
> > >>
> > >>  Thanks,
> > >>
> > >> Matt
> > >>
> > >>
> > >
> >
> >
> 
> 


Re: [petsc-dev] Is the petsc4py build broken?

2023-04-15 Thread Satish Balay via petsc-dev
Works for me with latest main - so I'm not sure what's going on here..

Is this reproducible in a clean clone? Also - are you using mpi4py in this 
build?
https://gitlab.com/petsc/petsc/-/issues/1359
[this issue looks different though..]

Satish

--

*** Building petsc4py ***
running build
running build_src
using Cython 0.29.33
cythonizing 'petsc4py/PETSc.pyx' -> 'petsc4py/PETSc.c'
running build_py
creating build
creating build/lib.linux-x86_64-cpython-311
creating build/lib.linux-x86_64-cpython-311/petsc4py
copying src/petsc4py/PETSc.py -> build/lib.linux-x86_64-cpython-311/petsc4py
copying src/petsc4py/__main__.py -> build/lib.linux-x86_64-cpython-311/petsc4py
copying src/petsc4py/__init__.py -> build/lib.linux-x86_64-cpython-311/petsc4py
creating build/lib.linux-x86_64-cpython-311/petsc4py/lib



On Sat, 15 Apr 2023, Matthew Knepley wrote:

> I get
> 
> *** Building petsc4py ***
> running build
> running build_src
> removing Cython 0.29.30 from sys.modules
> fetching build requirement 'Cython >= 0.29.32'
> Searching for Cython>=0.29.32
> Best match: Cython 3.0.0b2
> Processing Cython-3.0.0b2-py3.8-macosx-10.9-x86_64.egg
> 
> Using
> /PETSc3/petsc/petsc-dev/src/binding/petsc4py/.eggs/Cython-3.0.0b2-py3.8-macosx-10.9-x86_64.egg
> 
> using Cython 3.0.0b2
> cythonizing 'petsc4py/PETSc.pyx' -> 'petsc4py/PETSc.c'
> /PETSc3/petsc/petsc-dev/src/binding/petsc4py/conf/cythonize.py: No such
> file or directory: 'petsc4py/PETSc.pyx'
> error: Cython failure: 'petsc4py/PETSc.pyx' -> 'petsc4py/PETSc.c'
> **ERROR*
> Error building petsc4py.
> 
>   Thanks,
> 
>  Matt
> 
> 



Re: [petsc-dev] petsc release plan for Mar/2023

2023-03-26 Thread Satish Balay via petsc-dev
A reminder, the feature freeze for the upcoming release is in a couple of days.

Satish

On Tue, 28 Feb 2023, Satish Balay wrote:

> Its time for another PETSc release - due end of March.
> 
> For this release [3.19], lets work with the following dates:
> 
> - feature freeze: March 28 say 5PM EST
> - release: March 30 say 5PM EST
> 
> Merges after freeze should contain only fixes that would normally be 
> acceptable to "release" work-flow.
> 
> I've created a new milestone 'v3.19-release'. So if you are working on a MR 
> with the goal of merging before release - its best to use this tag with the 
> MR.
> 
> And it would be good to avoid merging large changes at the last minute. And 
> not have merge requests stuck in need of reviews, testing and other necessary 
> tasks.
> 
> And I would think the testing/CI resources would get stressed in this 
> timeframe - so it would be good to use them judiciously if possible.
> 
> Thanks,
> Satish
> 
> 



[petsc-dev] petsc release plan for Mar/2023

2023-02-28 Thread Satish Balay via petsc-dev
It's time for another PETSc release - due at the end of March.

For this release [3.19], let's work with the following dates:

- feature freeze: March 28 say 5PM EST
- release: March 30 say 5PM EST

Merges after freeze should contain only fixes that would normally be acceptable 
to "release" work-flow.

I've created a new milestone 'v3.19-release'. So if you are working on an MR 
with the goal of merging before release - it's best to use this tag with the MR.

And it would be good to avoid merging large changes at the last minute. And not 
have merge requests stuck in need of reviews, testing and other necessary tasks.

And I would think the testing/CI resources would get stressed in this timeframe 
- so it would be good to use them judiciously if possible.

Thanks,
Satish



Re: [petsc-dev] Apply for Google Summer of Code 2023?

2023-02-04 Thread Satish Balay via petsc-dev
BTW: the ANL summer student application process is also in progress - and
it could be an easier process [for Junchao] than Google for getting a student

[If I remember correctly - there is a category where students are at no
cost to the project]

Satish


On Fri, 3 Feb 2023, Junchao Zhang wrote:

> On Fri, Feb 3, 2023 at 1:31 PM Karl Rupp  wrote:
> 
> > Dear PETSc developers,
> >
> > in order to attract students to PETSc development, I'm thinking about a
> > PETSc application for Google Summer of Code (GSoC) 2023:
> >   https://summerofcode.withgoogle.com/programs/2023
> >
> > The org application deadline is February 7, i.e. in 4 days. This
> > application is - roughly speaking - a form with a state of intent and a
> > justification why the project is a good fit for GSoC. I've done this in
> > the past (~2010-12) and can do the paperwork again this year.
> >
> > What is required:
> >   - PETSc developers, who are willing to act as mentors throughout the
> 
> Hi, Karl, I am happy to act as a mentor
> 
> 
> >
> > program.
> >   - A few good project ideas (e.g. MATDENSE for GPUs) for
> > contributors/students to work on
> >
> * make I, J in AIJ able to have different types, i.e., I in 64-bit but J in
> 32-bit.
> * MATBAIJ/SBAIJ on GPUs
> * Support CUDA-12 (we do not now)
> 
> 
> >
> > It used to be that new organizations will get at most 2 contributor
> > slots assigned. That's fair, because one must not underestimate the
> > effort that goes into mentoring.
> >
> > Thoughts? Shall we apply (yes/no)? If yes, are you willing to be mentor?
> > The more mentors, the better; it underlines the importance of the
> > project and indicates that contributors will find a good environment.
> >
> > Thanks and best regards,
> > Karli
> >
> 



Re: [petsc-dev] PETSc 3.18.1 undefined reference

2022-10-31 Thread Satish Balay via petsc-dev
Both my builds below [4.2.0, 4.3.0] are with --download-cgns - yet they have 
different symbols..

perhaps there are more differences between these 2 versions [than just Seq vs 
MPI builds]

Satish

---
  args.append('-DCGNS_ENABLE_PARALLEL:BOOL=ON')
  args.append('-DHDF5_NEED_MPI:BOOL=ON')
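
[i.e. --download-cgns builds CGNS with MPI enabled. A standalone parallel CGNS 
build would need something similar passed to cmake - a sketch only; the MPI 
wrapper and source path below are site-specific:]

cmake -DCGNS_ENABLE_PARALLEL:BOOL=ON -DHDF5_NEED_MPI:BOOL=ON \
  -DCMAKE_C_COMPILER=mpicc /path/to/CGNS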


On Mon, 31 Oct 2022, Jed Brown wrote:

> Indeed, if you configure PETSc with CGNS, you must provide an MPI-enabled 
> CGNS. I'm sad that they chose to package this way. Do you think PETSc should 
> do something different other than documenting this?
> 
> "Antonio T. sagitter"  writes:
> 
> > 'cgp_close()' is in MPI CGNS only.
> >
> > Thank you
> >
> >  >Looks like cgp_close() is a cgns-4.3.0 feature.
> >  >
> >  >Satish
> >  >
> >  >---
> >  >
> >  >4.2.0:
> >  >nm -Ao libcgns.so |grep close |grep ' T '
> >  >libcgns.so:000ae832 T ADFI_close_file
> >  >libcgns.so:00061492 T cg_close
> >  >libcgns.so:0005ece4 T cgio_close_file
> >  >
> >  >4.3.0:
> >  >nm -Ao libcgns.so |grep close |grep ' T '
> >  >libcgns.so:000b9df1 T ADFI_close_file
> >  >libcgns.so:0006c2ab T cg_close
> >  >libcgns.so:00068ed8 T cgio_close_file
> >  >libcgns.so:000cfd65 T cgp_close
> >
> >
> > -- 
> > ---
> > Antonio Trande
> > Fedora Project
> > mailto: sagit...@fedoraproject.org
> > GPG key: 0x40FDA7B70789A9CD
> > GPG key server: https://keyserver1.pgp.com/
> 



Re: [petsc-dev] PETSc 3.18.1 undefined reference

2022-10-29 Thread Satish Balay via petsc-dev
Looks like cgp_close() is a cgns-4.3.0 feature.

Satish

---

4.2.0:
nm -Ao libcgns.so |grep close |grep ' T '
libcgns.so:000ae832 T ADFI_close_file
libcgns.so:00061492 T cg_close
libcgns.so:0005ece4 T cgio_close_file

4.3.0:
nm -Ao libcgns.so |grep close |grep ' T '
libcgns.so:000b9df1 T ADFI_close_file
libcgns.so:0006c2ab T cg_close
libcgns.so:00068ed8 T cgio_close_file
libcgns.so:000cfd65 T cgp_close



On Sat, 29 Oct 2022, Antonio T. sagitter wrote:

> Hi all.
> 
> In PETSc 3.18.1 on Fedora 36, configure is failing for undefined reference to
> CGNS libraries:
> 
> Possible ERROR while running linker: exit code 1
> stderr:
> /usr/bin/ld: /tmp/ccvhtRGQ.ltrans0.ltrans.o: in function `main':
> /tmp/petsc-o02hjls3/config.libraries/conftest.c:5: undefined reference to
> `cgp_close'
> collect2: error: ld returned 1 exit status
> Linker output before filtering:
> 
> /usr/bin/ld: /tmp/ccvhtRGQ.ltrans0.ltrans.o: in function `main':
> /tmp/petsc-o02hjls3/config.libraries/conftest.c:5: undefined reference to
> `cgp_close'
> collect2: error: ld returned 1 exit status
> :
> Linker output after filtering:
> /usr/bin/ld: /tmp/ccvhtRGQ.ltrans0.ltrans.o: in function `main':
> /tmp/petsc-o02hjls3/config.libraries/conftest.c:5: undefined reference to
> `cgp_close'
> collect2: error: ld returned 1 exit status:
>  Configure header /tmp/petsc-o02hjls3/confdefs.h 
> 
> We are using CGNS-4.2.0
> 
> Regards.
> 



[petsc-dev] petsc-3.18.1 now available

2022-10-26 Thread Satish Balay via petsc-dev
Dear PETSc users,

The patch release petsc-3.18.1 is now available for download.

https://petsc.org/release/install/download/

Satish




Re: [petsc-dev] petsc4py, numpy's BLAS and PETSc's BLAS

2022-10-24 Thread Satish Balay via petsc-dev
Hm - I see numpy on older OS - but not on M1. So Apple no longer bundles it?

And pip creates grief on NFS :(

Satish

--

balay@ypro ~ % sw_vers 
ProductName:Mac OS X
ProductVersion: 10.15.7
BuildVersion:   19H2026
balay@ypro ~ % python3 -c "import numpy; print(numpy.__file__)"
/Users/balay/Library/Python/3.8/lib/python/site-packages/numpy/__init__.py
balay@ypro ~ % 


compute-macos-240-02:~ balay$ sw_vers 
ProductName:macOS
ProductVersion: 12.3.1
BuildVersion:   21E258
compute-macos-240-02:~ balay$ python3 -c "import numpy; print(numpy.__file__)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'numpy'
compute-macos-240-02:~ balay$ 




On Mon, 24 Oct 2022, Zhang, Hong via petsc-dev wrote:

> The chances of these problems are very slim because almost nobody builds 
> Numpy from source. I usually install it with pip. Pip-installed Numpy on Mac 
> uses Openblas, which is shipped together with the numpy wheels. The official 
> API to check which BLAS is used by Numpy is numpy.show_config(). However, it 
> gives me false info on my laptop — the openblas libs do not really exist in 
> /usr/local/lib.
> 
> openblas64__info:
> libraries = ['openblas64_', 'openblas64_']
> library_dirs = ['/usr/local/lib']
> language = c
> define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), 
> ('HAVE_BLAS_ILP64', None)]
> runtime_library_dirs = ['/usr/local/lib']
> blas_ilp64_opt_info:
> libraries = ['openblas64_', 'openblas64_']
> library_dirs = ['/usr/local/lib']
> language = c
> define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), 
> ('HAVE_BLAS_ILP64', None)]
> runtime_library_dirs = ['/usr/local/lib']
> openblas64__lapack_info:
> libraries = ['openblas64_', 'openblas64_']
> library_dirs = ['/usr/local/lib']
> language = c
> define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), 
> ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)]
> runtime_library_dirs = ['/usr/local/lib']
> lapack_ilp64_opt_info:
> libraries = ['openblas64_', 'openblas64_']
> library_dirs = ['/usr/local/lib']
> language = c
> define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), 
> ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)]
> runtime_library_dirs = ['/usr/local/lib']
> Supported SIMD extensions in this NumPy install:
> baseline = SSE,SSE2,SSE3
> found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2
> not found = 
> AVX512F,AVX512CD,AVX512_KNL,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL
> 
> I think Numpy is actually using the following openblas lib:
> /usr/local/lib/python3.10/site-packages/numpy//.dylibs/libopenblas64_.0.dylib
> 
> I feel that it would be a big hassle if we want to determine the BLAS that 
> Numpy is using, considering the different ways and platforms Numpy may be 
> installed.
> 
> Hong (Mr.)
> 
> On Oct 21, 2022, at 4:20 PM, Barry Smith <bsm...@petsc.dev> wrote:
> 
> 
>  When PETSc is built with petsc4py this brings along, in some way, the 
> BLAS/LAPACK that numpy is using. Yet PETSc is free to bring in its own 
> BLAS/LAPACK libraries.
> 
>  To be completely proper should we be having configure (when used with 
> petsc4py) determine the BLAS/LAPACK that numpy is using and only using that 
> for PETSc's BLAS/LAPACK needs?  If not, why is ok to have both sets hanging 
> around? Jose's new https://gitlab.com/petsc/petsc/-/merge_requests/5737 seems 
> to indicate possible problems with having both.
> 
>  Barry
> 
> 


Re: [petsc-dev] petsc4py, numpy's BLAS and PETSc's BLAS

2022-10-24 Thread Satish Balay via petsc-dev
Yes - this always bothered me... 

But I don't think it's always possible to automate it.

The linkable version [.so] might not exist - only .so.ver might exist? [and it 
might use blas but not lapack?]
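
For a quick manual check, numpy's own report can be printed with the following - 
though, as above, it's not something configure could reliably automate:

python3 -c "import numpy; numpy.show_config()"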

Note: one way to avoid this issue is to let spack install 
python,numpy,petsc,petsc4py

Satish

---
$ locate numpy |grep \.so$ |grep python3.11
/usr/lib64/python3.11/site-packages/numpy/core/_multiarray_tests.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/core/_multiarray_umath.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/core/_operand_flag_tests.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/core/_rational_tests.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/core/_simd.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/core/_struct_ufunc_tests.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/core/_umath_tests.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/fft/_pocketfft_internal.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/linalg/_umath_linalg.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/linalg/lapack_lite.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/random/_bounded_integers.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/random/_common.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/random/_generator.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/random/_mt19937.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/random/_pcg64.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/random/_philox.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/random/_sfc64.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/random/bit_generator.cpython-311-x86_64-linux-gnu.so
/usr/lib64/python3.11/site-packages/numpy/random/mtrand.cpython-311-x86_64-linux-gnu.so
$ locate numpy |grep \.so$ |grep python3.11 |xargs ldd |grep = | cut -d = -f 2 | cut -d " " -f 2 | sort | uniq
/lib64/libc.so.6
/lib64/libflexiblas.so.3
/lib64/libgcc_s.so.1
/lib64/libgfortran.so.5
/lib64/libm.so.6
/lib64/libquadmath.so.0


On Fri, 21 Oct 2022, Barry Smith wrote:

> 
>   When PETSc is built with petsc4py this brings along, in some way, the 
> BLAS/LAPACK that numpy is using. Yet PETSc is free to bring in its own 
> BLAS/LAPACK libraries. 
> 
>   To be completely proper should we be having configure (when used with 
> petsc4py) determine the BLAS/LAPACK that numpy is using and only using that 
> for PETSc's BLAS/LAPACK needs?  If not, why is ok to have both sets hanging 
> around? Jose's new https://gitlab.com/petsc/petsc/-/merge_requests/5737 seems 
> to indicate possible problems with having both.
> 
>   Barry



Re: [petsc-dev] Manualpage TOC

2022-10-10 Thread Satish Balay via petsc-dev
https://gitlab.com/petsc/petsc/-/merge_requests/5724/diffs

>>>

* [API Changes in each release](../changes/index.rst)
* [MPI](http://www.mpich.org/static/docs/latest/)
* [Vector Operations (Vec)](Vec/index.md)


etc. was removed. I assumed it wasn't easy to migrate that to rst format.

prev: 
https://petsc.gitlab.io/-/petsc/-/jobs/3147219016/artifacts/public/html/docs/manualpages/index.html
current: https://petsc.org/release/docs/manualpages/

Satish


On Mon, 10 Oct 2022, Barry Smith wrote:

> 
>   What is worse about it? What did it look like before? How would you like it 
> to look? 
> 
>The left hand side on this page is broken and I do not know how to fix it. 
> But the middle panel can be changed to whatever is better
> 
> 
> 
> > On Oct 10, 2022, at 8:47 AM, Matthew Knepley  wrote:
> > 
> > The new push got rid of the top level organization on this page
> > 
> >   https://petsc.org/main/docs/manualpages/ 
> > 
> > 
> > To me, this looks much worse. Is there any way to restore it
> > without reversing the speed gains?
> > 
> >Matt
> > 
> > -- 
> > What most experimenters take for granted before they begin their 
> > experiments is infinitely more interesting than any results to which their 
> > experiments lead.
> > -- Norbert Wiener
> > 
> > https://www.cse.buffalo.edu/~knepley/ 
> 
> 



Re: [petsc-dev] Symbol names using clang in addition to gcc

2022-09-22 Thread Satish Balay via petsc-dev
MR with this fix at https://gitlab.com/petsc/petsc/-/merge_requests/5672

Satish

On Thu, 22 Sep 2022, Satish Balay via petsc-dev wrote:

> Perhaps the following change.
> 
> Satish
> ---
> 
> diff --git a/src/sys/dll/dlimpl.c b/src/sys/dll/dlimpl.c
> index fc488603167..63eba0d2fb3 100644
> --- a/src/sys/dll/dlimpl.c
> +++ b/src/sys/dll/dlimpl.c
> @@ -327,7 +327,7 @@ PetscErrorCode PetscDLAddr(void (*func)(void), char 
> **name)
>PetscFunctionBegin;
>PetscValidPointer(name, 2);
>*name = NULL;
> -#if defined(PETSC_HAVE_DLADDR) && defined(__USE_GNU)
> +#if defined(PETSC_HAVE_DLADDR) && !(defined(__cray__) && defined(__clang__))
>dlerror(); /* clear any previous error */
>{
>  Dl_info info;
> diff --git a/src/sys/objects/pinit.c b/src/sys/objects/pinit.c
> index 7694a0b496b..7b51f90d1d4 100644
> --- a/src/sys/objects/pinit.c
> +++ b/src/sys/objects/pinit.c
> @@ -829,7 +829,7 @@ PETSC_INTERN PetscErrorCode PetscInitialize_Common(const 
> char *prog, const char
>}
>  #endif
>  
> -#if PetscDefined(HAVE_DLSYM) && defined(__USE_GNU)
> +#if defined(PETSC_HAVE_DLADDR) && !(defined(__cray__) && defined(__clang__))
>/* These symbols are currently in the OpenMPI and MPICH libraries; they 
> may not always be, in that case the test will simply not detect the problem */
>PetscCheck(!dlsym(RTLD_DEFAULT, "ompi_mpi_init") || !dlsym(RTLD_DEFAULT, 
> "MPID_Abort"), PETSC_COMM_SELF, PETSC_ERR_MPI_LIB_INCOMP, "Application was 
> linked against both OpenMPI and MPICH based MPI libraries and will not run 
> correctly");
>  #endif
> 
> 
> On Thu, 22 Sep 2022, Satish Balay via petsc-dev wrote:
> 
> > Actually the failure on Cray is with clang.
> > 
> > So I'm not sure the best way to fix this is.
> > 
> > Satish
> > 
> > --
> > 
> > [balay@login2.crusher petsc]$ CC --version
> > Cray clang version 14.0.0  (c98838affc7b58fed2a72f164d77c35e1bc8772f)
> > Target: x86_64-unknown-linux-gnu
> > Thread model: posix
> > InstalledDir: /opt/cray/pe/cce/14.0.0/cce-clang/x86_64/share/../bin
> > [balay@login2.crusher petsc]$ git diff src/sys/dll/dlimpl.c |cat
> > diff --git a/src/sys/dll/dlimpl.c b/src/sys/dll/dlimpl.c
> > index fc488603167..c73edb99f90 100644
> > --- a/src/sys/dll/dlimpl.c
> > +++ b/src/sys/dll/dlimpl.c
> > @@ -327,7 +327,7 @@ PetscErrorCode PetscDLAddr(void (*func)(void), char 
> > **name)
> >PetscFunctionBegin;
> >PetscValidPointer(name, 2);
> >*name = NULL;
> > -#if defined(PETSC_HAVE_DLADDR) && defined(__USE_GNU)
> > +#if defined(PETSC_HAVE_DLADDR) && (defined(__USE_GNU) || 
> > defined(__clang__))
> >dlerror(); /* clear any previous error */
> >{
> >  Dl_info info;
> > [balay@login2.crusher petsc]$ make libs
> >   CC arch-olcf-crusher/obj/sys/dll/dlimpl.o
> > /autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:333:5: error: use 
> > of undeclared identifier 'Dl_info'
> > Dl_info info;
> > ^
> > /autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:335:16: warning: 
> > implicit declaration of function 'dladdr' is invalid in C99 
> > [-Wimplicit-function-declaration]
> > PetscCheck(dladdr(*(void **)&func, &info), PETSC_COMM_SELF, 
> > PETSC_ERR_LIB, "Failed to lookup symbol: %s", dlerror());
> >^
> > /autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:335:41: error: use 
> > of undeclared identifier 'info'
> > PetscCheck(dladdr(*(void **)&func, &info), PETSC_COMM_SELF, 
> > PETSC_ERR_LIB, "Failed to lookup symbol: %s", dlerror());
> > ^
> > /autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:337:35: error: use 
> > of undeclared identifier 'info'
> > PetscCall(PetscDemangleSymbol(info.dli_sname, name));
> >   ^
> > 1 warning and 3 errors generated.
> > make: *** [gmakefile:195: arch-olcf-crusher/obj/sys/dll/dlimpl.o] Error 1
> > 
> > 
> > On Thu, 22 Sep 2022, Satish Balay via petsc-dev wrote:
> > 
> > > I see this is change was done at 
> > > https://gitlab.com/petsc/petsc/-/merge_requests/5268
> > > 
> > > Likely due to errors with cray compilers.
> > > 
> > > So  I guess we could add in __clang__ as you suggest. Can you create an 
> > > MR with this change?
> > > 
> > > And probably the same fix for src/sys/objects/pinit.c ?
> > > 
> > > Satish
> > > 
> > > 

Re: [petsc-dev] Symbol names using clang in addition to gcc

2022-09-22 Thread Satish Balay via petsc-dev
Perhaps the following change.

Satish
---

diff --git a/src/sys/dll/dlimpl.c b/src/sys/dll/dlimpl.c
index fc488603167..63eba0d2fb3 100644
--- a/src/sys/dll/dlimpl.c
+++ b/src/sys/dll/dlimpl.c
@@ -327,7 +327,7 @@ PetscErrorCode PetscDLAddr(void (*func)(void), char **name)
   PetscFunctionBegin;
   PetscValidPointer(name, 2);
   *name = NULL;
-#if defined(PETSC_HAVE_DLADDR) && defined(__USE_GNU)
+#if defined(PETSC_HAVE_DLADDR) && !(defined(__cray__) && defined(__clang__))
   dlerror(); /* clear any previous error */
   {
 Dl_info info;
diff --git a/src/sys/objects/pinit.c b/src/sys/objects/pinit.c
index 7694a0b496b..7b51f90d1d4 100644
--- a/src/sys/objects/pinit.c
+++ b/src/sys/objects/pinit.c
@@ -829,7 +829,7 @@ PETSC_INTERN PetscErrorCode PetscInitialize_Common(const 
char *prog, const char
   }
 #endif
 
-#if PetscDefined(HAVE_DLSYM) && defined(__USE_GNU)
+#if defined(PETSC_HAVE_DLADDR) && !(defined(__cray__) && defined(__clang__))
   /* These symbols are currently in the OpenMPI and MPICH libraries; they may 
not always be, in that case the test will simply not detect the problem */
   PetscCheck(!dlsym(RTLD_DEFAULT, "ompi_mpi_init") || !dlsym(RTLD_DEFAULT, 
"MPID_Abort"), PETSC_COMM_SELF, PETSC_ERR_MPI_LIB_INCOMP, "Application was 
linked against both OpenMPI and MPICH based MPI libraries and will not run 
correctly");
 #endif
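
A quick way to sanity-check what macros a given compiler/libc combination actually
defines (a rough sketch - the grep patterns are only illustrative, and the
__USE_GNU check applies only on a glibc-based system):

  echo '#include <dlfcn.h>' | cc -x c -D_GNU_SOURCE -E -dM - | grep __USE_GNU
  cc -x c -E -dM - </dev/null | grep -iE 'clang|cray'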


On Thu, 22 Sep 2022, Satish Balay via petsc-dev wrote:

> Actually the failure on Cray is with clang.
> 
> So I'm not sure the best way to fix this is.
> 
> Satish
> 
> --
> 
> [balay@login2.crusher petsc]$ CC --version
> Cray clang version 14.0.0  (c98838affc7b58fed2a72f164d77c35e1bc8772f)
> Target: x86_64-unknown-linux-gnu
> Thread model: posix
> InstalledDir: /opt/cray/pe/cce/14.0.0/cce-clang/x86_64/share/../bin
> [balay@login2.crusher petsc]$ git diff src/sys/dll/dlimpl.c |cat
> diff --git a/src/sys/dll/dlimpl.c b/src/sys/dll/dlimpl.c
> index fc488603167..c73edb99f90 100644
> --- a/src/sys/dll/dlimpl.c
> +++ b/src/sys/dll/dlimpl.c
> @@ -327,7 +327,7 @@ PetscErrorCode PetscDLAddr(void (*func)(void), char 
> **name)
>PetscFunctionBegin;
>PetscValidPointer(name, 2);
>*name = NULL;
> -#if defined(PETSC_HAVE_DLADDR) && defined(__USE_GNU)
> +#if defined(PETSC_HAVE_DLADDR) && (defined(__USE_GNU) || defined(__clang__))
>dlerror(); /* clear any previous error */
>{
>  Dl_info info;
> [balay@login2.crusher petsc]$ make libs
>   CC arch-olcf-crusher/obj/sys/dll/dlimpl.o
> /autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:333:5: error: use of 
> undeclared identifier 'Dl_info'
> Dl_info info;
> ^
> /autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:335:16: warning: 
> implicit declaration of function 'dladdr' is invalid in C99 
> [-Wimplicit-function-declaration]
> PetscCheck(dladdr(*(void **), ), PETSC_COMM_SELF, 
> PETSC_ERR_LIB, "Failed to lookup symbol: %s", dlerror());
>^
> /autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:335:41: error: use 
> of undeclared identifier 'info'
> PetscCheck(dladdr(*(void **), ), PETSC_COMM_SELF, 
> PETSC_ERR_LIB, "Failed to lookup symbol: %s", dlerror());
> ^
> /autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:337:35: error: use 
> of undeclared identifier 'info'
>     PetscCall(PetscDemangleSymbol(info.dli_sname, name));
>   ^
> 1 warning and 3 errors generated.
> make: *** [gmakefile:195: arch-olcf-crusher/obj/sys/dll/dlimpl.o] Error 1
> 
> 
> On Thu, 22 Sep 2022, Satish Balay via petsc-dev wrote:
> 
> > I see this change was done at 
> > https://gitlab.com/petsc/petsc/-/merge_requests/5268
> > 
> > Likely due to errors with cray compilers.
> > 
> > So  I guess we could add in __clang__ as you suggest. Can you create an MR 
> > with this change?
> > 
> > And probably the same fix for src/sys/objects/pinit.c ?
> > 
> > Satish
> > 
> > On Thu, 22 Sep 2022, Aagaard, Brad T via petsc-dev wrote:
> > 
> > > Satish,
> > > 
> > > I used to be able to get symbol names using clang (macOS) and this still 
> > > works, but I need to edit the defines in dlimpl.c because __USE_GNU is 
> > > not defined. Is there a reason why the current code is limited to 
> > > __USE_GNU and doesn’t allow broader use when it works?
> > > 
> > > Here is the change I made to my local version to allow symbol names.
> > > 
> > > diff --git a/src/sys/dll/dlimpl.c b/src/sys/dll/dlimpl.c
> > > index 5bd68aa5a33..ded4ce5adbb 100644
> > > --- a/src

Re: [petsc-dev] Symbol names using clang in addition to gcc

2022-09-22 Thread Satish Balay via petsc-dev
Actually the failure on Cray is with clang.

So I'm not sure the best way to fix this is.

Satish

--

[balay@login2.crusher petsc]$ CC --version
Cray clang version 14.0.0  (c98838affc7b58fed2a72f164d77c35e1bc8772f)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /opt/cray/pe/cce/14.0.0/cce-clang/x86_64/share/../bin
[balay@login2.crusher petsc]$ git diff src/sys/dll/dlimpl.c |cat
diff --git a/src/sys/dll/dlimpl.c b/src/sys/dll/dlimpl.c
index fc488603167..c73edb99f90 100644
--- a/src/sys/dll/dlimpl.c
+++ b/src/sys/dll/dlimpl.c
@@ -327,7 +327,7 @@ PetscErrorCode PetscDLAddr(void (*func)(void), char **name)
   PetscFunctionBegin;
   PetscValidPointer(name, 2);
   *name = NULL;
-#if defined(PETSC_HAVE_DLADDR) && defined(__USE_GNU)
+#if defined(PETSC_HAVE_DLADDR) && (defined(__USE_GNU) || defined(__clang__))
   dlerror(); /* clear any previous error */
   {
 Dl_info info;
[balay@login2.crusher petsc]$ make libs
  CC arch-olcf-crusher/obj/sys/dll/dlimpl.o
/autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:333:5: error: use of 
undeclared identifier 'Dl_info'
Dl_info info;
^
/autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:335:16: warning: 
implicit declaration of function 'dladdr' is invalid in C99 
[-Wimplicit-function-declaration]
PetscCheck(dladdr(*(void **), ), PETSC_COMM_SELF, PETSC_ERR_LIB, 
"Failed to lookup symbol: %s", dlerror());
   ^
/autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:335:41: error: use of 
undeclared identifier 'info'
PetscCheck(dladdr(*(void **), ), PETSC_COMM_SELF, PETSC_ERR_LIB, 
"Failed to lookup symbol: %s", dlerror());
^
/autofs/nccs-svm1_home1/balay/petsc/src/sys/dll/dlimpl.c:337:35: error: use of 
undeclared identifier 'info'
PetscCall(PetscDemangleSymbol(info.dli_sname, name));
  ^
1 warning and 3 errors generated.
make: *** [gmakefile:195: arch-olcf-crusher/obj/sys/dll/dlimpl.o] Error 1


On Thu, 22 Sep 2022, Satish Balay via petsc-dev wrote:

> I see this change was done at 
> https://gitlab.com/petsc/petsc/-/merge_requests/5268
> 
> Likely due to errors with cray compilers.
> 
> So  I guess we could add in __clang__ as you suggest. Can you create an MR 
> with this change?
> 
> And probably the same fix for src/sys/objects/pinit.c ?
> 
> Satish
> 
> On Thu, 22 Sep 2022, Aagaard, Brad T via petsc-dev wrote:
> 
> > Satish,
> > 
> > I used to be able to get symbol names using clang (macOS) and this still 
> > works, but I need to edit the defines in dlimpl.c because __USE_GNU is not 
> > defined. Is there a reason why the current code is limited to __USE_GNU and 
> > doesn’t allow broader use when it works?
> > 
> > Here is the change I made to my local version to allow symbol names.
> > 
> > diff --git a/src/sys/dll/dlimpl.c b/src/sys/dll/dlimpl.c
> > index 5bd68aa5a33..ded4ce5adbb 100644
> > --- a/src/sys/dll/dlimpl.c
> > +++ b/src/sys/dll/dlimpl.c
> > @@ -323,7 +323,7 @@ PetscErrorCode PetscDLAddr(void (*func)(void), char 
> > **name) {
> >PetscFunctionBegin;
> >PetscValidPointer(name, 2);
> >*name = NULL;
> > -#if defined(PETSC_HAVE_DLADDR) && defined(__USE_GNU)
> > +#if defined(PETSC_HAVE_DLADDR) && (defined(__USE_GNU) || 
> > defined(__clang__))
> >dlerror(); /* clear any previous error */
> >{
> >  Dl_info info;
> > 
> > 
> > Thanks,
> > Brad
> > 
> > 
> 


Re: [petsc-dev] Symbol names using clang in addition to gcc

2022-09-22 Thread Satish Balay via petsc-dev
I see this change was done at 
https://gitlab.com/petsc/petsc/-/merge_requests/5268

Likely due to errors with cray compilers.

So  I guess we could add in __clang__ as you suggest. Can you create an MR with 
this change?

And probably the same fix for src/sys/objects/pinit.c ?

Satish

On Thu, 22 Sep 2022, Aagaard, Brad T via petsc-dev wrote:

> Satish,
> 
> I used to be able to get symbol names using clang (macOS) and this still 
> works, but I need to edit the defines in dlimpl.c because __USE_GNU is not 
> defined. Is there a reason why the current code is limited to __USE_GNU and 
> doesn’t allow broader use when it works?
> 
> Here is the change I made to my local version to allow symbol names.
> 
> diff --git a/src/sys/dll/dlimpl.c b/src/sys/dll/dlimpl.c
> index 5bd68aa5a33..ded4ce5adbb 100644
> --- a/src/sys/dll/dlimpl.c
> +++ b/src/sys/dll/dlimpl.c
> @@ -323,7 +323,7 @@ PetscErrorCode PetscDLAddr(void (*func)(void), char 
> **name) {
>PetscFunctionBegin;
>PetscValidPointer(name, 2);
>*name = NULL;
> -#if defined(PETSC_HAVE_DLADDR) && defined(__USE_GNU)
> +#if defined(PETSC_HAVE_DLADDR) && (defined(__USE_GNU) || defined(__clang__))
>dlerror(); /* clear any previous error */
>{
>  Dl_info info;
> 
> 
> Thanks,
> Brad
> 
> 


Re: [petsc-dev] Fwd: Pending configuration of custom domain docs.petsc.org

2022-08-30 Thread Satish Balay via petsc-dev
Hm - we don't use readthedocs anymore.

And I see docs.petsc.org is  getting redirected to  https://petsc.org/release/

So I guess perhaps we don't need to update anything on readthedocs.

[perhaps Jed can confirm]

Satish

On Tue, 30 Aug 2022, Matthew Knepley wrote:

> Is someone looking at this?
> 
>   Thanks,
> 
>  Matt
> 
> -- Forwarded message -
> From: Read the Docs 
> Date: Mon, Aug 29, 2022 at 11:00 PM
> Subject: Pending configuration of custom domain docs.petsc.org
> To: 
> 
> 
> Hello,
> 
> The configuration of your custom domain docs.petsc.org
>  is pending.
> Make sure to follow the step from our documentation
>  to complete the
> process.
> 
> If you don't complete the configuration, we will stop trying to validate
> your domain in 3 weeks, 1 day.
> Keep documenting,
> Read the Docs
> Read the Docs
> https://readthedocs.org
> 
> 
> 



Re: [petsc-dev] Type mismatch warnings

2022-08-19 Thread Satish Balay via petsc-dev
There is also -Wno-lto-type-mismatch - but don't know if it's for this issue or 
a different one.
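
Whichever flag ends up being the relevant one, it would be passed via the Fortran
flags at configure time - a minimal sketch (assuming a gfortran >= 10 based build,
and that configure forwards FFLAGS to the compile lines):

  ./configure FFLAGS="-fallow-argument-mismatch -Wno-lto-type-mismatch" [usual options]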

Satish

On Fri, 19 Aug 2022, Blaise Bourdin wrote:

> It prints this warning instead of throwing an error. I should have mentioned 
> that this is on an M1 Mac, since gfortran behaviour and flags are so 
> platform-dependent. Blaise
> 
> 
>   On Aug 19, 2022, at 1:51 PM, Satish Balay  wrote:
> 
> Does -fallow-argument-mismatch work?
> 
> Satish
> 
> On Fri, 19 Aug 2022, Blaise Bourdin wrote:
> 
>   Hi,
> 
>   Does anybody know if there is a magic gfortran flag to get rid of type 
> mismatch warnings?
>   These pop up when using PetscObjectSetName with two different petsc 
> objects, for instance.
> 
>   Warning: Type mismatch between actual argument at (1) and actual 
> argument at (2) (TYPE(tdm)/TYPE(tpetscsf)).
> 
>   Regards,
>   Blaise
> 
>   —
>   Tier 1 Canada Research Chair in Mathematical and Computational Aspects 
> of Solid Mechanics
>   Professor, Department of Mathematics & Statistics
>   Hamilton Hall room 409A, McMaster University
>   1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
>   https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> 
> 
> 
> — 
> Tier 1 Canada Research Chair in Mathematical and Computational Aspects of 
> Solid Mechanics
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> 
> 
> 


Re: [petsc-dev] Type mismatch warnings

2022-08-19 Thread Satish Balay via petsc-dev
Does -fallow-argument-mismatch work?

Satish

On Fri, 19 Aug 2022, Blaise Bourdin wrote:

> Hi,
> 
> Does anybody know if there is a magic gfortran flag to get rid of type 
> mismatch warnings?
> These pop up when using PetscObjectSetName with two different petsc objects, 
> for instance.
> 
>   Warning: Type mismatch between actual argument at (1) and actual 
> argument at (2) (TYPE(tdm)/TYPE(tpetscsf)).
> 
> Regards,
> Blaise
> 
> — 
> Tier 1 Canada Research Chair in Mathematical and Computational Aspects of 
> Solid Mechanics
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> 
> 


Re: [petsc-dev] tests coverage

2022-08-12 Thread Satish Balay via petsc-dev
I think there is some logic there that marks only the lines from the MR diff. 
Barry might remember this correctly.
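
If full local coverage is needed, one rough way to get it (a sketch only - the
--with-gcov configure option name and the use of $PETSC_ARCH for the object dir
are assumptions here; the CI gcov jobs referenced in this thread are the
canonical setup):

  ./configure --with-gcov=1 [usual options] && make
  make -f gmakefile test        # or restrict to the tests that exercise the file
  gcov -o $PETSC_ARCH/obj/dm/impls/plex src/dm/impls/plex/plexexodusii.c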

Satish

On Fri, 12 Aug 2022, Blaise Bourdin wrote:

> thanks Satish,
> 
> That doesn’t seem quite right, though. For instance, I see that 
> https://petsc.gitlab.io/-/petsc/-/jobs/2841119765/artifacts/arch-ci-analyze-pipeline/index_gcov.html
>  reports only 30 testable lines in
> plexexodusii.c, none of which are tested, while I know that there are much 
> more than that, and that most of them are indeed tested.
> Regards,
> Blaise
> 
>   On Aug 12, 2022, at 11:37 AM, Satish Balay  wrote:
> 
> There is some coverage info - there are a couple of gcov tests - but
> that doesn't show coverage from all tests.
> 
> For ex: https://gitlab.com/petsc/petsc/-/merge_requests/5509
> 
> click on the last stage/job of the pipeline 'analyze-pipeline' i.e
> https://gitlab.com/petsc/petsc/-/jobs/2841119765
> 
> Here - browse 'artifacts' i.e
> https://gitlab.com/petsc/petsc/-/jobs/2841119765/artifacts/browse
> 
> Here go to 'arch-ci-analyze-pipeline' and click 'index_gcov.html' i.e:
> https://petsc.gitlab.io/-/petsc/-/jobs/2841119765/artifacts/arch-ci-analyze-pipeline/index_gcov.html
> 
> Satish
> 
> On Fri, 12 Aug 2022, Blaise Bourdin wrote:
> 
>   Hi,
> 
>   Is the source coverage analysis by the tests easily available? When 
> submitting a MR, I want to know if adding a test is necessary.
> 
>   Regards,
>   Blaise
> 
> 
>   —
>   Tier 1 Canada Research Chair in Mathematical and Computational Aspects 
> of Solid Mechanics
>   Professor, Department of Mathematics & Statistics
>   Hamilton Hall room 409A, McMaster University
>   1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
>   https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> 
> 
> 
> — 
> Tier 1 Canada Research Chair in Mathematical and Computational Aspects of 
> Solid Mechanics
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> 
> 
> 


Re: [petsc-dev] tests coverage

2022-08-12 Thread Satish Balay via petsc-dev
There is some coverage info - there are a couple of gcov tests - but
that doesn't show coverage from all tests.

For ex: https://gitlab.com/petsc/petsc/-/merge_requests/5509

click on the last stage/job of the pipeline 'analyze-pipeline' i.e
https://gitlab.com/petsc/petsc/-/jobs/2841119765

Here - browse 'artifacts' i.e
https://gitlab.com/petsc/petsc/-/jobs/2841119765/artifacts/browse

Here go to 'arch-ci-analyze-pipeline' and click 'index_gcov.html' i.e:
https://petsc.gitlab.io/-/petsc/-/jobs/2841119765/artifacts/arch-ci-analyze-pipeline/index_gcov.html

Satish

On Fri, 12 Aug 2022, Blaise Bourdin wrote:

> Hi,
> 
> Is the source coverage analysis by the tests easily available? When 
> submitting a MR, I want to know if adding a test is necessary.
> 
> Regards,
> Blaise
> 
> 
> — 
> Tier 1 Canada Research Chair in Mathematical and Computational Aspects of 
> Solid Mechanics
> Professor, Department of Mathematics & Statistics
> Hamilton Hall room 409A, McMaster University
> 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
> https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> 
> 


[petsc-dev] petsc-3.17.4 now available

2022-08-01 Thread Satish Balay via petsc-dev
Dear PETSc users,

The patch release petsc-3.17.4 is now available for download.

http://www.mcs.anl.gov/petsc/download/index.html

Satish




Re: [petsc-dev] test failure in main

2022-07-29 Thread Satish Balay via petsc-dev
A fix is now merged to main. A *new* pipeline on the MR should work now.

Satish

On Fri, 29 Jul 2022, Barry Smith wrote:

> 
>   I just reported this on the testing-ci slack channel one second ago. Looks 
> like it was introduced with Matt's last merge.
> 
> 
> > On Jul 29, 2022, at 5:37 PM, Blaise Bourdin  wrote:
> > 
> > Hi,
> > 
> > I am trying to get a bunch of small MR through the test system without 
> > success. Right now, it looks like even the main branch does not pass the 
> > tests:
> > 
> > ./configure --CFLAGS='-Wimplicit-function-declaration -Wunused' 
> > --FFLAGS='-ffree-line-length-none -fallow-argument-mismatch -Wunused' 
> > --download-metis=1 --download-parmetis=1 --with-debugging=1 
> > --with-shared-libraries=1 --with-x11=1
> > 
> > SiMini:petsc-main (main)$ make -f gmakefile test 
> > search="dm_impls_plex_tutorials-ex8_3d_q1_periodic_project"
> > Using MAKEFLAGS: search=dm_impls_plex_tutorials-ex8_3d_q1_periodic_project
> >TEST 
> > monterey-gcc11.3-arm64-basic-g/tests/counts/dm_impls_plex_tutorials-ex8_3d_q1_periodic_project.counts
> > ok dm_impls_plex_tutorials-ex8_3d_q1_periodic_project
> > not ok diff-dm_impls_plex_tutorials-ex8_3d_q1_periodic_project # Error 
> > code: 1
> > #   10c10
> > #   <   marker: 1 strata with value/size (1 (36))
> > #   ---
> > #   >   marker: 1 strata with value/size (1 (48))
> > 
> > 
> > # FAILED diff-dm_impls_plex_tutorials-ex8_3d_q1_periodic_project
> > #
> > # To rerun failed tests: 
> > # /opt/homebrew/bin/gmake -f gmakefile test test-fail=1
> > 
> > am I the only one to see this?
> > 
> > Regards,
> > Blaise
> > 
> > 
> > — 
> > Tier 1 Canada Research Chair in Mathematical and Computational Aspects of 
> > Solid Mechanics
> > Professor, Department of Mathematics & Statistics
> > Hamilton Hall room 409A, McMaster University
> > 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
> > https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
> > 
> 


Re: [petsc-dev] [Minor issue] Lag during PetscInitialize on Fedora 36 (never seen before with other OS)

2022-07-27 Thread Satish Balay via petsc-dev
On Wed, 27 Jul 2022, Satish Balay via petsc-dev wrote:

> Maybe should try without hwloc...

So this helps! i.e build with:

$ ./configure --with-debugging=0 --download-mpich --with-hwloc=0 && make

>>>
balay@p1 /home/balay/petsc/src/ksp/ksp/tests (main =)
$ sleep 30; time ./ex39; time ./ex39

real0m0.029s
user0m0.020s
sys 0m0.009s

real0m0.027s
user0m0.018s
sys 0m0.008s
<<<<

Satish



Re: [petsc-dev] [Minor issue] Lag during PetscInitialize on Fedora 36 (never seen before with other OS)

2022-07-27 Thread Satish Balay via petsc-dev
"sleep 30; strace ./ex39" shows an extra pause while reading the following file 
(on my laptop):


balay@p1 /home/balay
$ sleep 10; time cat /sys/bus/pci/devices/0000:00:01.0/config; time cat 
/sys/bus/pci/devices/0000:00:01.0/config
�00���
real0m1.271s
user0m0.000s
sys 0m0.008s
�00���
real0m0.003s
user0m0.001s
sys 0m0.002s

balay@p1 /home/balay
$ lspci -v -s :00:01.0
00:01.0 PCI bridge: Intel Corporation 6th-10th Gen Core Processor PCIe 
Controller (x16) (rev 0d) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 122, IOMMU group 1
Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
I/O behind bridge: 3000-3fff [size=4K]
Memory behind bridge: ed00-ee0f [size=17M]
Prefetchable memory behind bridge: c000-d1ff 
[size=288M]
Capabilities: 
Kernel driver in use: pcieport
<<<

Maybe should try without hwloc...

Satish


On Wed, 27 Jul 2022, LEDAC Pierre wrote:

> 
> Ok thanks all, so with Satish test, we know it is MPI_Init call from 
> PetscInitialize.
> 
> Probably, as Matt said, something related to gethostbyname() ...
> 
> 
> I will do some other test, contact MPICH support and will report any 
> workaround,
> 
> 
> Thanks again for your reactivity !
> 
> 
> Pierre LEDAC
> Commissariat à l’énergie atomique et aux énergies alternatives
> Centre de SACLAY
> DES/ISAS/DM2S/STMF/LGLS
> Bâtiment 451 – point courrier n°43
> F-91191 Gif-sur-Yvette
> +33 1 69 08 04 03
> +33 6 83 42 05 79
> 
> De : Satish Balay 
> Envoyé : mercredi 27 juillet 2022 18:57:16
> À : LEDAC Pierre
> Cc : petsc-dev@mcs.anl.gov
> Objet : Re: [petsc-dev] [Minor issue] Lag during PetscInitialize on Fedora 36 
> (never seen before with other OS)
> 
> I'm seeing this behavior on a laptop - but not desktop [both F36] (with both 
> mpich and openmpi buids)
> 
>  laptop/mpich 
> balay@p1 /home/balay/petsc/src/ksp/ksp/tests (main =)
> $ sleep 30; time ./ex39; time ./ex39
> 
> real0m1.709s
> user0m0.079s
> sys 0m0.042s
> 
> real0m0.116s
> user0m0.077s
> sys 0m0.014s
> 
> 
> However --with-mpi=0 is fine
> 
> 
> balay@p1 /home/balay/petsc/src/ksp/ksp/tests (main =)
> $ sleep 30; time ./ex39; time ./ex39
> 
> real0m0.116s
> user0m0.108s
> sys 0m0.008s
> 
> real0m0.081s
> user0m0.072s
> sys 0m0.008s
> <
> 
> Here is the desktop run (with mpich)
> 
> >>>
> [balay@pj01 tests]$ sleep 30; time ./ex39; time ./ex39
> 
> real0m0.065s
> user0m0.033s
> sys 0m0.019s
> 
> real0m0.053s
> user0m0.029s
> sys 0m0.013s
> 
> 
> And --with-mpi=0
> 
> >>>
> [balay@pj01 tests]$ sleep 30; time ./ex39; time ./ex39
> 
> real0m0.087s
> user0m0.046s
> sys 0m0.015s
> 
> real0m0.024s
> user0m0.018s
> sys 0m0.006s
> <<<
> 
> But don't know why its behaving this way [on the laptop]
> 
> Satish
> 
> On Wed, 27 Jul 2022, LEDAC Pierre wrote:
> 
> > Hello,
> >
> >
> > Recently migrated from Fedora34 to Fedora36, using PETSc I have some lag 
> > during the PetscInitialize, which
> >
> > disappeared if I run again immediately the binary. But after few seconds, 
> > the lag happens again (see below).
> >
> >
> > I suspected MPICH 4.0.2 on Fedora36, but a small reproducer indicated the 
> > issue is not in MPI_Init but really during PetscInitialize
> >
> >
> > Just annoying during testing, not very important, but did someone already 
> > see this ?
> >
> >
> > Thanks,
> >
> >
> > portable: /volatile/ledacp/petsc/src/ksp/ksp/tests (main) > time ./ex39
> >
> > real0m3,593s
> > user0m0,872s
> > sys0m0,433s
> > portable: /volatile/ledacp/petsc/src/ksp/ksp/tests (main) > time ./ex39
> >
> > real0m0,310s
> > user0m0,884s
> > sys0m0,023s
> >
> > # Wait ~20 seconds then again:
> >
> > portable: /volatile/ledacp/petsc/src/ksp/ksp/tests (main) > time ./ex39
> >
> > real0m3,507s
> > user0m0,897s
> > sys0m0,025s
> > portable: /volatile/ledacp/petsc/src/ksp/ksp/tests (main) > time ./ex39
> >
> > real0m0,154s
> > user0m0,725s
> > sys0m0,015s
> > portable: /volatile/ledacp/petsc/src/ksp/ksp/tests (main) > time ./ex39
> >
> > real0m0,176s
> > user0m0,875s
> > sys0m0,021s
> >
> >
> >
> > Pierre LEDAC
> > Commissariat à l’énergie atomique et aux énergies alternatives
> > Centre de SACLAY
> > DES/ISAS/DM2S/STMF/LGLS
> > Bâtiment 451 – point courrier n°43
> > F-91191 Gif-sur-Yvette
> > +33 1 69 08 04 03
> > +33 6 83 42 05 79
> >
> 


Re: [petsc-dev] [Minor issue] Lag during PetscInitialize on Fedora 36 (never seen before with other OS)

2022-07-27 Thread Satish Balay via petsc-dev
I'm seeing this behavior on a laptop - but not desktop [both F36] (with both 
mpich and openmpi buids)

 laptop/mpich 
balay@p1 /home/balay/petsc/src/ksp/ksp/tests (main =)
$ sleep 30; time ./ex39; time ./ex39

real0m1.709s
user0m0.079s
sys 0m0.042s

real0m0.116s
user0m0.077s
sys 0m0.014s


However --with-mpi=0 is fine


balay@p1 /home/balay/petsc/src/ksp/ksp/tests (main =)
$ sleep 30; time ./ex39; time ./ex39

real0m0.116s
user0m0.108s
sys 0m0.008s

real0m0.081s
user0m0.072s
sys 0m0.008s
<

Here is the desktop run (with mpich)

>>>
[balay@pj01 tests]$ sleep 30; time ./ex39; time ./ex39

real0m0.065s
user0m0.033s
sys 0m0.019s

real0m0.053s
user0m0.029s
sys 0m0.013s


And --with-mpi=0

>>>
[balay@pj01 tests]$ sleep 30; time ./ex39; time ./ex39

real0m0.087s
user0m0.046s
sys 0m0.015s

real0m0.024s
user0m0.018s
sys 0m0.006s
<<<

But don't know why its behaving this way [on the laptop]

Satish

On Wed, 27 Jul 2022, LEDAC Pierre wrote:

> Hello,
> 
> 
> Recently migrated from Fedora34 to Fedora36, using PETSc I have some lag 
> during the PetscInitialize, which
> 
> disappeared if I run again immediately the binary. But after few seconds, the 
> lag happens again (see below).
> 
> 
> I suspected MPICH 4.0.2 on Fedora36, but a small reproducer indicated the 
> issue is not in MPI_Init but really during PetscInitialize
> 
> 
> Just annoying during testing, not very important, but did someone already see 
> this ?
> 
> 
> Thanks,
> 
> 
> portable: /volatile/ledacp/petsc/src/ksp/ksp/tests (main) > time ./ex39
> 
> real0m3,593s
> user0m0,872s
> sys0m0,433s
> portable: /volatile/ledacp/petsc/src/ksp/ksp/tests (main) > time ./ex39
> 
> real0m0,310s
> user0m0,884s
> sys0m0,023s
> 
> # Wait ~20 seconds then again:
> 
> portable: /volatile/ledacp/petsc/src/ksp/ksp/tests (main) > time ./ex39
> 
> real0m3,507s
> user0m0,897s
> sys0m0,025s
> portable: /volatile/ledacp/petsc/src/ksp/ksp/tests (main) > time ./ex39
> 
> real0m0,154s
> user0m0,725s
> sys0m0,015s
> portable: /volatile/ledacp/petsc/src/ksp/ksp/tests (main) > time ./ex39
> 
> real0m0,176s
> user0m0,875s
> sys0m0,021s
> 
> 
> 
> Pierre LEDAC
> Commissariat à l’énergie atomique et aux énergies alternatives
> Centre de SACLAY
> DES/ISAS/DM2S/STMF/LGLS
> Bâtiment 451 – point courrier n°43
> F-91191 Gif-sur-Yvette
> +33 1 69 08 04 03
> +33 6 83 42 05 79
> 


Re: [petsc-dev] ld: warning: could not create compact unwind for _dgeev_: registers 27 and 28 not saved contiguously in frame

2022-07-19 Thread Satish Balay via petsc-dev
Fande,

You can use the hash for this - i.e  477e44bbb558b1357d86363677accbb4bcdfaabc 
in moose builds/CI - and see if that works
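
For example (a rough sketch - the exact path of the petsc submodule inside a MOOSE
checkout is an assumption here):

  cd moose/petsc
  git fetch origin
  git checkout 477e44bbb558b1357d86363677accbb4bcdfaabc
  # then commit the updated submodule pointer on the MOOSE side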

Satish

On Tue, 19 Jul 2022, Satish Balay via petsc-dev wrote:

> Created MR with your patch to release 
> https://gitlab.com/petsc/petsc/-/merge_requests/5447
> 
> Can merge this into release-3.16 as-well.
> 
> Satish
> 
> On Tue, 19 Jul 2022, Satish Balay via petsc-dev wrote:
> 
> > Ok - this issue persists in main branch
> > 
> > reproducible [on M1 mac] with:
> > 
> > ./configure --with-mpi=0 --download-fblaslapack --with-debugging=no
> > 
> > Satish
> > 
> > On Tue, 19 Jul 2022, Barry Smith wrote:
> > 
> > > 
> > >Urgg, I was sure we saw it recently and "fixed" it but searches on my 
> > > machine and googling don't find any particular place where we fixed it so 
> > > I must be imagining things.
> > > 
> > >   Barry
> > > 
> > > 
> > > > On Jul 19, 2022, at 2:22 PM, Satish Balay  wrote:
> > > > 
> > > > Barry,
> > > > 
> > > > Which commit in 3.17 fixed this?
> > > > 
> > > > Fande,
> > > > 
> > > > If I add a patch to branch "release-3.16" - would that get used? [as 
> > > > there won't be any new 3.16 tarballs]
> > > > 
> > > > BTW: Any particular reason to use fblaslapack - instead of [default] 
> > > > veclib on Mac?
> > > > 
> > > > Satish
> > > > 
> > > > 
> > > > On Tue, 19 Jul 2022, Fande Kong wrote:
> > > > 
> > > >> Hi Barry,
> > > >> 
> > > >> It would be nice if we could get this patch to PETSc-3.16.
> > > >> 
> > > >> We will upgrade to PETSc-3.17 for sure but it will take a while
> > > >> 
> > > >> Fande
> > > >> 
> > > >> On Tue, Jul 19, 2022 at 11:59 AM Barry Smith  wrote:
> > > >> 
> > > >>> 
> > > >>>  I think we have a fix for this in the 3.17 release. Perhaps Satish 
> > > >>> could
> > > >>> stick this line into the 3.16 release/tag for its users?
> > > >>> 
> > > >>>  Barry
> > > >>> 
> > > >>> 
> > > >>>> On Jul 19, 2022, at 1:48 PM, Fande Kong  wrote:
> > > >>>> 
> > > >>>> Hi PETSc team,
> > > >>>> 
> > > >>>> We had trouble when compiling fblaslapack on Apple M1.  We could get
> > > >>> around this issue by using the following patch, but do not really know
> > > >>> whether or not it is the right way to fix this issue.
> > > >>>> 
> > > >>>> petsc % git diff
> > > >>>> 
> > > >>>> diff --git a/config/BuildSystem/config/framework.py
> > > >>> b/config/BuildSystem/config/framework.py
> > > >>>> index 5b210ebb58..0ce27ef06d 100644
> > > >>>> --- a/config/BuildSystem/config/framework.py
> > > >>>> +++ b/config/BuildSystem/config/framework.py
> > > >>>> @@ -554,6 +554,7 @@ class Framework(config.base.Configure,
> > > >>> script.LanguageProcessor):
> > > >>>>   lines = [s for s in lines if s.find(' was built for newer macOS
> > > >>> version') < 0]
> > > >>>>   lines = [s for s in lines if s.find(' was built for newer OSX
> > > >>> version') < 0]
> > > >>>>   lines = [s for s in lines if s.find(' stack subq instruction is
> > > >>> too different from dwarf stack size') < 0]
> > > >>>> +  lines = [s for s in lines if s.find('could not create compact
> > > >>> unwind') < 0]
> > > >>>>   # Nvidia linker
> > > >>>>   lines = [s for s in lines if s.find('nvhpc.ld contains output
> > > >>> sections') < 0]
> > > >>>>   if lines: output = '\n'.join(lines)
> > > >>>> 
> > > >>>> 
> > > >>>> Please check the log file to have more details of failing messages.
> > > >>>> 
> > > >>>> 
> > > >>>> Thanks,
> > > >>>> 
> > > >>>> Fande
> > > >>>> 
> > > >>> 
> > > >>> 
> > > >> 
> > > > 
> > > 
> > 
> 



Re: [petsc-dev] ld: warning: could not create compact unwind for _dgeev_: registers 27 and 28 not saved contiguously in frame

2022-07-19 Thread Satish Balay via petsc-dev
Created MR with your patch to release 
https://gitlab.com/petsc/petsc/-/merge_requests/5447

Can merge this into release-3.16 as well.

Satish

On Tue, 19 Jul 2022, Satish Balay via petsc-dev wrote:

> Ok - this issue persists in main branch
> 
> reproducible [on M1 mac] with:
> 
> ./configure --with-mpi=0 --download-fblaslapack --with-debugging=no
> 
> Satish
> 
> On Tue, 19 Jul 2022, Barry Smith wrote:
> 
> > 
> >Urgg, I was sure we saw it recently and "fixed" it but searches on my 
> > machine and googling don't find any particular place where we fixed it so I 
> > must be imagining things.
> > 
> >   Barry
> > 
> > 
> > > On Jul 19, 2022, at 2:22 PM, Satish Balay  wrote:
> > > 
> > > Barry,
> > > 
> > > Which commit in 3.17 fixed this?
> > > 
> > > Fande,
> > > 
> > > If I add a patch to branch "release-3.16" - would that get used? [as 
> > > there won't be any new 3.16 tarballs]
> > > 
> > > BTW: Any particular reason to use fblaslapack - instead of [default] 
> > > veclib on Mac?
> > > 
> > > Satish
> > > 
> > > 
> > > On Tue, 19 Jul 2022, Fande Kong wrote:
> > > 
> > >> Hi Barry,
> > >> 
> > >> It would be nice if we could get this patch to PETSc-3.16.
> > >> 
> > >> We will upgrade to PETSc-3.17 for sure but it will take a while
> > >> 
> > >> Fande
> > >> 
> > >> On Tue, Jul 19, 2022 at 11:59 AM Barry Smith  wrote:
> > >> 
> > >>> 
> > >>>  I think we have a fix for this in the 3.17 release. Perhaps Satish 
> > >>> could
> > >>> stick this line into the 3.16 release/tag for its users?
> > >>> 
> > >>>  Barry
> > >>> 
> > >>> 
> > >>>> On Jul 19, 2022, at 1:48 PM, Fande Kong  wrote:
> > >>>> 
> > >>>> Hi PETSc team,
> > >>>> 
> > >>>> We had trouble when compiling fblaslapack on Apple M1.  We could get
> > >>> around this issue by using the following patch, but do not really know
> > >>> whether or not it is the right way to fix this issue.
> > >>>> 
> > >>>> petsc % git diff
> > >>>> 
> > >>>> diff --git a/config/BuildSystem/config/framework.py
> > >>> b/config/BuildSystem/config/framework.py
> > >>>> index 5b210ebb58..0ce27ef06d 100644
> > >>>> --- a/config/BuildSystem/config/framework.py
> > >>>> +++ b/config/BuildSystem/config/framework.py
> > >>>> @@ -554,6 +554,7 @@ class Framework(config.base.Configure,
> > >>> script.LanguageProcessor):
> > >>>>   lines = [s for s in lines if s.find(' was built for newer macOS
> > >>> version') < 0]
> > >>>>   lines = [s for s in lines if s.find(' was built for newer OSX
> > >>> version') < 0]
> > >>>>   lines = [s for s in lines if s.find(' stack subq instruction is
> > >>> too different from dwarf stack size') < 0]
> > >>>> +  lines = [s for s in lines if s.find('could not create compact
> > >>> unwind') < 0]
> > >>>>   # Nvidia linker
> > >>>>   lines = [s for s in lines if s.find('nvhpc.ld contains output
> > >>> sections') < 0]
> > >>>>   if lines: output = '\n'.join(lines)
> > >>>> 
> > >>>> 
> > >>>> Please check the log file to have more details of failing messages.
> > >>>> 
> > >>>> 
> > >>>> Thanks,
> > >>>> 
> > >>>> Fande
> > >>>> 
> > >>> 
> > >>> 
> > >> 
> > > 
> > 
> 



Re: [petsc-dev] ld: warning: could not create compact unwind for _dgeev_: registers 27 and 28 not saved contiguously in frame

2022-07-19 Thread Satish Balay via petsc-dev
Ok - this issue persists in main branch

reproducible [on M1 mac] with:

./configure --with-mpi=0 --download-fblaslapack --with-debugging=no

Satish

On Tue, 19 Jul 2022, Barry Smith wrote:

> 
>Urgg, I was sure we saw it recently and "fixed" it but searches on my 
> machine and googling don't find any particular place where we fixed it so I 
> must be imagining things.
> 
>   Barry
> 
> 
> > On Jul 19, 2022, at 2:22 PM, Satish Balay  wrote:
> > 
> > Barry,
> > 
> > Which commit in 3.17 fixed this?
> > 
> > Fande,
> > 
> > If I add a patch to branch "release-3.16" - would that get used? [as there 
> > won't be any new 3.16 tarballs]
> > 
> > BTW: Any particular reason to use fblaslapack - instead of [default] veclib 
> > on Mac?
> > 
> > Satish
> > 
> > 
> > On Tue, 19 Jul 2022, Fande Kong wrote:
> > 
> >> Hi Barry,
> >> 
> >> It would be nice if we could get this patch to PETSc-3.16.
> >> 
> >> We will upgrade to PETSc-3.17 for sure but it will take a while
> >> 
> >> Fande
> >> 
> >> On Tue, Jul 19, 2022 at 11:59 AM Barry Smith  wrote:
> >> 
> >>> 
> >>>  I think we have a fix for this in the 3.17 release. Perhaps Satish could
> >>> stick this line into the 3.16 release/tag for its users?
> >>> 
> >>>  Barry
> >>> 
> >>> 
>  On Jul 19, 2022, at 1:48 PM, Fande Kong  wrote:
>  
>  Hi PETSc team,
>  
>  We had trouble when compiling fblaslapack on Apple M1.  We could get
> >>> around this issue by using the following patch, but do not really know
> >>> whether or not it is the right way to fix this issue.
>  
>  petsc % git diff
>  
>  diff --git a/config/BuildSystem/config/framework.py
> >>> b/config/BuildSystem/config/framework.py
>  index 5b210ebb58..0ce27ef06d 100644
>  --- a/config/BuildSystem/config/framework.py
>  +++ b/config/BuildSystem/config/framework.py
>  @@ -554,6 +554,7 @@ class Framework(config.base.Configure,
> >>> script.LanguageProcessor):
>    lines = [s for s in lines if s.find(' was built for newer macOS
> >>> version') < 0]
>    lines = [s for s in lines if s.find(' was built for newer OSX
> >>> version') < 0]
>    lines = [s for s in lines if s.find(' stack subq instruction is
> >>> too different from dwarf stack size') < 0]
>  +  lines = [s for s in lines if s.find('could not create compact
> >>> unwind') < 0]
>    # Nvidia linker
>    lines = [s for s in lines if s.find('nvhpc.ld contains output
> >>> sections') < 0]
>    if lines: output = '\n'.join(lines)
>  
>  
>  Please check the log file to have more details of failing messages.
>  
>  
>  Thanks,
>  
>  Fande
>  
> >>> 
> >>> 
> >> 
> > 
> 



Re: [petsc-dev] ld: warning: could not create compact unwind for _dgeev_: registers 27 and 28 not saved contiguously in frame

2022-07-19 Thread Satish Balay via petsc-dev
On Tue, 19 Jul 2022, Fande Kong wrote:

> On Tue, Jul 19, 2022 at 12:22 PM Satish Balay  wrote:
> 
> > Barry,
> >
> > Which commit in 3.17 fixed this?
> >
> > Fande,
> >
> > If I add a patch to branch "release-3.16" - would that get used? [as there
> > won't be any new 3.16 tarballs]
> >
> 
> Yes, we can use it. We treat PETSc as a submodule in MOOSE. We can attach
> any hash if we want

Ok - If we can figure out the patch in 3.17 that fixes it - I can add it to 
release-3.16

Or we can create a branch release-3.16-moose - and add in whatever patches you 
want. [and you can use that commit/hash/branch in MOOSE/petsc submodule]
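
Roughly (branch and commit names below are only illustrative):

  git checkout -b release-3.16-moose origin/release-3.16
  git cherry-pick <commit-with-the-fix>
  git push origin release-3.16-moose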

[I'm hesitating to add the attached patch to the release-3.16 branch. If this patch 
is something that we would merge all the way to main - then we could add it to 
the release-3.16 branch]

Satish

> 
> Thanks,
> 
> Fande
> 
> 
> >
> > BTW: Any particular reason to use fblaslapack - instead of [default]
> > veclib on Mac?
> >
> > Satish
> >
> >
> > On Tue, 19 Jul 2022, Fande Kong wrote:
> >
> > > Hi Barry,
> > >
> > > It would be nice if we could get this patch to PETSc-3.16.
> > >
> > > We will upgrade to PETSc-3.17 for sure but it will take a while
> > >
> > > Fande
> > >
> > > On Tue, Jul 19, 2022 at 11:59 AM Barry Smith  wrote:
> > >
> > > >
> > > >   I think we have a fix for this in the 3.17 release. Perhaps Satish
> > could
> > > > stick this line into the 3.16 release/tag for its users?
> > > >
> > > >   Barry
> > > >
> > > >
> > > > > On Jul 19, 2022, at 1:48 PM, Fande Kong  wrote:
> > > > >
> > > > > Hi PETSc team,
> > > > >
> > > > > We had trouble when compiling fblaslapack on Apple M1.  We could get
> > > > around this issue by using the following patch, but do not really know
> > > > whether or not it is the right way to fix this issue.
> > > > >
> > > > >  petsc % git diff
> > > > >
> > > > > diff --git a/config/BuildSystem/config/framework.py
> > > > b/config/BuildSystem/config/framework.py
> > > > > index 5b210ebb58..0ce27ef06d 100644
> > > > > --- a/config/BuildSystem/config/framework.py
> > > > > +++ b/config/BuildSystem/config/framework.py
> > > > > @@ -554,6 +554,7 @@ class Framework(config.base.Configure,
> > > > script.LanguageProcessor):
> > > > >lines = [s for s in lines if s.find(' was built for newer
> > macOS
> > > > version') < 0]
> > > > >lines = [s for s in lines if s.find(' was built for newer OSX
> > > > version') < 0]
> > > > >lines = [s for s in lines if s.find(' stack subq instruction
> > is
> > > > too different from dwarf stack size') < 0]
> > > > > +  lines = [s for s in lines if s.find('could not create compact
> > > > unwind') < 0]
> > > > ># Nvidia linker
> > > > >lines = [s for s in lines if s.find('nvhpc.ld contains output
> > > > sections') < 0]
> > > > >if lines: output = '\n'.join(lines)
> > > > >
> > > > >
> > > > > Please check the log file to have more details of failing messages.
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Fande
> > > > > 
> > > >
> > > >
> > >
> >
> >
> 



Re: [petsc-dev] ld: warning: could not create compact unwind for _dgeev_: registers 27 and 28 not saved contiguously in frame

2022-07-19 Thread Satish Balay via petsc-dev
>>>
Strumpack requires the LAPACK routine dlapmr(), the current Lapack libraries 
['liblapack.a', 'libblas.a'] does not have it
Try using --download-fblaslapack=1 option 
<<<

Ok - so strumpack can't use vecLib

Satish

On Tue, 19 Jul 2022, Satish Balay via petsc-dev wrote:

> >>>>
> Executing: mpicc --version
> stdout:
> clang version 12.0.1
> Target: arm64-apple-darwin20.0.0
> Thread model: posix
> InstalledDir: /Users/kongf/mambaforge3/envs/moose-mpich/bin
> <<<<<<
> 
> Ah yes - Moose doesn't use xcode compilers [or libraries..]
> 
> Does download-openblas also fail?
> 
> Satish
> 
> On Tue, 19 Jul 2022, Satish Balay via petsc-dev wrote:
> 
> > Barry,
> > 
> > Which commit in 3.17 fixed this?
> > 
> > Fande,
> > 
> > If I add a patch to branch "release-3.16" - would that get used? [as there 
> > won't be any new 3.16 tarballs]
> > 
> > BTW: Any particular reason to use fblaslapack - instead of [default] veclib 
> > on Mac?
> > 
> > Satish
> > 
> > 
> > On Tue, 19 Jul 2022, Fande Kong wrote:
> > 
> > > Hi Barry,
> > > 
> > > It would be nice if we could get this patch to PETSc-3.16.
> > > 
> > > We will upgrade to PETSc-3.17 for sure but it will take a while
> > > 
> > > Fande
> > > 
> > > On Tue, Jul 19, 2022 at 11:59 AM Barry Smith  wrote:
> > > 
> > > >
> > > >   I think we have a fix for this in the 3.17 release. Perhaps Satish 
> > > > could
> > > > stick this line into the 3.16 release/tag for its users?
> > > >
> > > >   Barry
> > > >
> > > >
> > > > > On Jul 19, 2022, at 1:48 PM, Fande Kong  wrote:
> > > > >
> > > > > Hi PETSc team,
> > > > >
> > > > > We had trouble when compiling fblaslapack on Apple M1.  We could get
> > > > around this issue by using the following patch, but do not really know
> > > > whether or not it is the right way to fix this issue.
> > > > >
> > > > >  petsc % git diff
> > > > >
> > > > > diff --git a/config/BuildSystem/config/framework.py
> > > > b/config/BuildSystem/config/framework.py
> > > > > index 5b210ebb58..0ce27ef06d 100644
> > > > > --- a/config/BuildSystem/config/framework.py
> > > > > +++ b/config/BuildSystem/config/framework.py
> > > > > @@ -554,6 +554,7 @@ class Framework(config.base.Configure,
> > > > script.LanguageProcessor):
> > > > >lines = [s for s in lines if s.find(' was built for newer macOS
> > > > version') < 0]
> > > > >lines = [s for s in lines if s.find(' was built for newer OSX
> > > > version') < 0]
> > > > >lines = [s for s in lines if s.find(' stack subq instruction is
> > > > too different from dwarf stack size') < 0]
> > > > > +  lines = [s for s in lines if s.find('could not create compact
> > > > unwind') < 0]
> > > > ># Nvidia linker
> > > > >lines = [s for s in lines if s.find('nvhpc.ld contains output
> > > > sections') < 0]
> > > > >if lines: output = '\n'.join(lines)
> > > > >
> > > > >
> > > > > Please check the log file to have more details of failing messages.
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Fande
> > > > > 
> > > >
> > > >
> > > 
> > 
> 



Re: [petsc-dev] ld: warning: could not create compact unwind for _dgeev_: registers 27 and 28 not saved contiguously in frame

2022-07-19 Thread Satish Balay via petsc-dev
>>>>
Executing: mpicc --version
stdout:
clang version 12.0.1
Target: arm64-apple-darwin20.0.0
Thread model: posix
InstalledDir: /Users/kongf/mambaforge3/envs/moose-mpich/bin
<<<<<<

Ah yes - Moose doesn't use xcode compilers [or libraries..]

Does download-openblas also fail?

Satish

On Tue, 19 Jul 2022, Satish Balay via petsc-dev wrote:

> Barry,
> 
> Which commit in 3.17 fixed this?
> 
> Fande,
> 
> If I add a patch to branch "release-3.16" - would that get used? [as there 
> won't be any new 3.16 tarballs]
> 
> BTW: Any particular reason to use fblaslapack - instead of [default] veclib 
> on Mac?
> 
> Satish
> 
> 
> On Tue, 19 Jul 2022, Fande Kong wrote:
> 
> > Hi Barry,
> > 
> > It would be nice if we could get this patch to PETSc-3.16.
> > 
> > We will upgrade to PETSc-3.17 for sure but it will take a while
> > 
> > Fande
> > 
> > On Tue, Jul 19, 2022 at 11:59 AM Barry Smith  wrote:
> > 
> > >
> > >   I think we have a fix for this in the 3.17 release. Perhaps Satish could
> > > stick this line into the 3.16 release/tag for its users?
> > >
> > >   Barry
> > >
> > >
> > > > On Jul 19, 2022, at 1:48 PM, Fande Kong  wrote:
> > > >
> > > > Hi PETSc team,
> > > >
> > > > We had trouble when compiling fblaslapack on Apple M1.  We could get
> > > around this issue by using the following patch, but do not really know
> > > whether or not it is the right way to fix this issue.
> > > >
> > > >  petsc % git diff
> > > >
> > > > diff --git a/config/BuildSystem/config/framework.py
> > > b/config/BuildSystem/config/framework.py
> > > > index 5b210ebb58..0ce27ef06d 100644
> > > > --- a/config/BuildSystem/config/framework.py
> > > > +++ b/config/BuildSystem/config/framework.py
> > > > @@ -554,6 +554,7 @@ class Framework(config.base.Configure,
> > > script.LanguageProcessor):
> > > >lines = [s for s in lines if s.find(' was built for newer macOS
> > > version') < 0]
> > > >lines = [s for s in lines if s.find(' was built for newer OSX
> > > version') < 0]
> > > >lines = [s for s in lines if s.find(' stack subq instruction is
> > > too different from dwarf stack size') < 0]
> > > > +  lines = [s for s in lines if s.find('could not create compact
> > > unwind') < 0]
> > > ># Nvidia linker
> > > >lines = [s for s in lines if s.find('nvhpc.ld contains output
> > > sections') < 0]
> > > >if lines: output = '\n'.join(lines)
> > > >
> > > >
> > > > Please check the log file to have more details of failing messages.
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Fande
> > > > 
> > >
> > >
> > 
> 



Re: [petsc-dev] ld: warning: could not create compact unwind for _dgeev_: registers 27 and 28 not saved contiguously in frame

2022-07-19 Thread Satish Balay via petsc-dev
Barry,

Which commit in 3.17 fixed this?

Fande,

If I add a patch to branch "release-3.16" - would that get used? [as there 
won't be any new 3.16 tarballs]

BTW: Any particular reason to use fblaslapack - instead of [default] veclib on 
Mac?

Satish


On Tue, 19 Jul 2022, Fande Kong wrote:

> Hi Barry,
> 
> It would be nice if we could get this patch to PETSc-3.16.
> 
> We will upgrade to PETSc-3.17 for sure but it will take a while
> 
> Fande
> 
> On Tue, Jul 19, 2022 at 11:59 AM Barry Smith  wrote:
> 
> >
> >   I think we have a fix for this in the 3.17 release. Perhaps Satish could
> > stick this line into the 3.16 release/tag for its users?
> >
> >   Barry
> >
> >
> > > On Jul 19, 2022, at 1:48 PM, Fande Kong  wrote:
> > >
> > > Hi PETSc team,
> > >
> > > We had trouble when compiling fblaslapack on Apple M1.  We could get
> > around this issue by using the following patch, but do not really know
> > whether or not it is the right way to fix this issue.
> > >
> > >  petsc % git diff
> > >
> > > diff --git a/config/BuildSystem/config/framework.py
> > b/config/BuildSystem/config/framework.py
> > > index 5b210ebb58..0ce27ef06d 100644
> > > --- a/config/BuildSystem/config/framework.py
> > > +++ b/config/BuildSystem/config/framework.py
> > > @@ -554,6 +554,7 @@ class Framework(config.base.Configure,
> > script.LanguageProcessor):
> > >lines = [s for s in lines if s.find(' was built for newer macOS
> > version') < 0]
> > >lines = [s for s in lines if s.find(' was built for newer OSX
> > version') < 0]
> > >lines = [s for s in lines if s.find(' stack subq instruction is
> > too different from dwarf stack size') < 0]
> > > +  lines = [s for s in lines if s.find('could not create compact
> > unwind') < 0]
> > ># Nvidia linker
> > >lines = [s for s in lines if s.find('nvhpc.ld contains output
> > sections') < 0]
> > >if lines: output = '\n'.join(lines)
> > >
> > >
> > > Please check the log file to have more details of failing messages.
> > >
> > >
> > > Thanks,
> > >
> > > Fande
> > > 
> >
> >
> 



[petsc-dev] petsc-3.17.3 now available

2022-06-29 Thread Satish Balay via petsc-dev
Dear PETSc users,

The patch release petsc-3.17.3 is now available for download.

http://www.mcs.anl.gov/petsc/download/index.html

Satish




[petsc-dev] petsc-3.17.2 now available

2022-06-03 Thread Satish Balay via petsc-dev
Dear PETSc users,

The patch release petsc-3.17.2 is now available for download.

http://www.mcs.anl.gov/petsc/download/index.html

Satish




Re: [petsc-dev] have requests for MR review indicate time expected to complete

2022-05-25 Thread Satish Balay via petsc-dev
On Wed, 25 May 2022, Barry Smith wrote:

> 
> 
> > On May 25, 2022, at 12:06 PM, Satish Balay  wrote:
> > 
> > On Wed, 25 May 2022, Matthew Knepley wrote:
> > 
> >> On Wed, May 25, 2022 at 11:55 AM Barry Smith  wrote:
> >> 
> >>> 
> >>>  It would be nice if when people received MR review requests it indicated
> >>> if the review was trivial and could be done in a minute or two. Then maybe
> >>> quick ones could flow through the system faster.
> >>> 
> >>>  How do people 1) know first when they have things they should (or have
> >>> been asked to) review and 2) know the list of things they should review at
> >>> any point in time. I am interested in other people's work flows to see if 
> >>> I
> >>> should adjust mine.
> >> 
> >> 
> >> I get emails for review, but it would be nice if there was a dashboard at
> >> Gitlab where we could see what is pending for us.
> > 
> > https://gitlab.com/dashboard/merge_requests?scope=all=opened_username=knepley
> > 
> > You can easily access this via the options on the top-right (along with 
> > issues, todo-list)
> > 
> >> 
> >> Is there a way to indicate if a review is trivial or long?
> > 
> > Perhaps the author could add some text wrt what parts the reviewer should 
> > pay extra attention to.
> 
>   Yes but how would they provide the text in a way that is simple for the 
> reviewer to process quickly, some text in the intro screen box would only be 
> of limited use. 

In some sense the author can do the first round of review - i.e. add in comments 
along the corresponding diff lines - requesting appropriate feedback (or 
furnishing more context) that can help the reviewer look at those changes more 
closely. I've used this mode a few times.

Satish

> For example, if gitlab had a "Changes for you to look at" based on your code 
> ownership records then people could process exactly what they need to and not 
> have to wade through things that they want to ignore.
> 
> > 
> > However who would evaluate if the review is trivial or long? Even if the 
> > author thinks its trivial - the reviewer might evaluate it differently.
> 
>   Sure, but it would be a way for someone to avoid putting something off if 
> the notification stated clearly this will likely take you very little time. 
> If you look and see it will take a long time you can put it off.
> 
> > 
> > One related issue is - some MRs pack in way too many changes - where 
> > multiple MRs would be more appropriate. [and easier to review - as now some 
> > of them could be trivial - and others require more thought]
> 
>True. But since people don't always quickly process trivial ones (since 
> they don't know they are trivial) the trivial ones can hang around as long as 
> big ones and require just as much nagging of reviewers. So having a better 1) 
> indication of triviality 2) easy way to review just the parts you are 
> qualified for might lead to people doing a better job of separating MR.

Ultimately each developer uses a different process - anything to help push the 
review process forward is good - but I don't think there is a simple 
alternative to "communication to reviewer and action/response from the 
reviewer" to get things moving..

Satish

> 
> > 
> > Satish
> 



Re: [petsc-dev] have requests for MR review indicate time expected to complete

2022-05-25 Thread Satish Balay via petsc-dev
On Wed, 25 May 2022, Matthew Knepley wrote:

> On Wed, May 25, 2022 at 11:55 AM Barry Smith  wrote:
> 
> >
> >   It would be nice if when people received MR review requests it indicated
> > if the review was trivial and could be done in a minute or two. Then maybe
> > quick ones could flow through the system faster.
> >
> >   How do people 1) know first when they have things they should (or have
> > been asked to) review and 2) know the list of things they should review at
> > any point in time. I am interested in other people's work flows to see if I
> > should adjust mine.
> 
> 
> I get emails for review, but it would be nice if there was a dashboard at
> Gitlab where we could see what is pending for us.

https://gitlab.com/dashboard/merge_requests?scope=all=opened_username=knepley

You can easily access this via the options on the top-right (along with issues, 
todo-list)

> 
> Is there a way to indicate if a review is trivial or long?

Perhaps the author could add some text wrt what parts the reviewer should pay 
extra attention to.

However who would evaluate if the review is trivial or long? Even if the author 
thinks its trivial - the reviewer might evaluate it differently.

One related issue is - some MRs pack in way too many changes - where multiple 
MRs would be more appropriate. [and easier to review - as now some of them 
could be trivial - and others require more thought]

Satish


Re: [petsc-dev] Manual page improvements! (Docs MRs to main until PETSc 3.18 is released)

2022-05-04 Thread Satish Balay via petsc-dev
On Wed, 4 May 2022, Patrick Sanan wrote:

> Unlike most previous docs changes, this has only been done on the main
> branch. So, **until PETSc 3.18 is released, make documentation MRs to main**,
> unless fixing something particularly critical on the release branch, or
> making a change which you've tested won't cause (serious) merge conflicts
> when release is merged into main.

This is true for non-doc fixes as well. [higher probability of merge conflicts 
and other issues]

One way to check for potential (serious) merge conflicts is:

- always make the needed change to main branch
- once the fix is complete - attempt a rebase onto release
git fetch && git rebase --onto origin/release origin/main
- If this rebase is clean - then it could be an MR to release [with an easy 
merge - back to main]

Satish


[petsc-dev] petsc-3.17.1 now available

2022-04-29 Thread Satish Balay via petsc-dev
Dear PETSc users,

The patch release petsc-3.17.1 is now available for download.

http://www.mcs.anl.gov/petsc/download/index.html

Satish




Re: [petsc-dev] CHKERRQ vs PetscCall for Fortran? Which is the future?

2022-04-26 Thread Satish Balay via petsc-dev
Hm we reverted all fortran examples to use CHKERRQ(). [from PetscCall] so 
presumably CHKERRQ() is still the preferred interface from fortran?

Satish

On Tue, 26 Apr 2022, Jacob Faibussowitsch wrote:

> Hi Glenn,
> 
> `PetscCall()` is the future, apologies for the confusion.
> 
> `CHKERRQ()` was mistakenly deleted from the fortran include files but exists 
> for backwards-compatibility only.
> 
> Unlike “normal” changes we opted not to formally deprecate `CHKERRQ()` and 
> friends (complete with compiler warnings) since they are so widely used.
> 
> Best regards,
> 
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> 
> > On Apr 26, 2022, at 17:36, Hammond, Glenn E via petsc-dev 
> >  wrote:
> > 
> > PETSc,
> > 
> > I see that CHKERRQ is back in the Fortran interface after 3.17.1.  Will 
> > CHKERRQ be removed in the future?  I just wrote a script to refactor 
> > PFLOTRAN [CHKERRQ() -> PetscCall()], and I want to know which direction to 
> > head before asking everything to check in all their dev branches.  If 
> > CHKERRQ() is available with Fortran for the future, I will abandon the 
> > script and leave the devs alone.
> > 
> > Thanks,
> > 
> > Glenn
> 


Re: [petsc-dev] PetscUse/TryMethod

2022-04-03 Thread Satish Balay via petsc-dev
Perhaps there are already some political decisions made. Since PETSc is part of 
xsdk - we are committed to contribute to it.

But then when evaluating MRs - it's hard to remember how to enforce some of these 
commitments. [so improving CI is the primary suggestion - and only at the 
xsdk level]


Also my comments so far are from the xsdk point of view - and this thread started 
off from the burden petsc changes impose on regular users [and packages] - so 
those issues
can't easily be addressed in any 'enforced DOE political decision'

Satish

On Sun, 3 Apr 2022, Barry Smith wrote:

> 
>   I think this requires the political decision to be made on the order of the 
> finalization and rules to enforce the order and timings. In a distributed DOE 
> world this kind of political decision is tough. 
> 
>I think a system more like just-in-time-packaging has to emerge to give 
> more flexibility for the free-wheeling open source work. I have no idea how 
> one could achieve this. Some kind of "smart" versioning that doe not require 
> one to explicitly manage a bunch of "if package version is x do y else if 
> version is z do w."
> 
> 
> 
> > On Apr 3, 2022, at 1:29 PM, Satish Balay  wrote:
> > 
> > there is certainly frustration with changes.
> > 
> > And then there could be real issues. If similar major changes land in sept 
> > release [at the last minite] in any critical packages [that others packages 
> > don't quickly add it to their own sept release ] - that might break things 
> > in a way that xsdk release could not be delivered.
> > 
> > [and usually triggers hard to resolve discussion of who should address this 
> > failure - that changed package - or dependent packages. And even if patches 
> > are available - they might not get in - due to workflow isues and such].
> > 
> > Satish
> > 
> > On Sun, 3 Apr 2022, Barry Smith wrote:
> > 
> >> 
> >>  We have not updated Sundials because they developed an entirely new code 
> >> with new APIs, it is essentially a new package with tons of new 
> >> functionality. Had they been incrementally changing things over the years 
> >> we would have actually kept up with it; so this is not a good example of 
> >> how small API changes keep us from upgrading, not at all. It is just an 
> >> example of how it takes a lot of work to wrap a new large package like 
> >> Sundials 3 from scratch and someone must make a big effort to do it. Note: 
> >> I think Sundials was right to do a rewrite, their classic design was 
> >> preventing them from making dramatic additions to the old code by doing a 
> >> complete rewrite they could accomplish so much more.
> >> 
> >>  MOAB has not been updated presumably because there are no or very 
> >> unaggressive users.
> >> 
> >>  Side note: I understand the frustration and grumbling that takes place 
> >> when one has to deal with change, especially when from a perspective as an 
> >> outer-sider to a project the change may seem unnecessary, that frustration 
> >> and grumbling is normal, I do it all the time. But it should not dictate 
> >> policy.
> >> 
> >> 
> >> 
> >> 
> >> 
> >>> On Apr 3, 2022, at 12:58 PM, Satish Balay  wrote:
> >>> 
> >>> On Sun, 3 Apr 2022, Barry Smith wrote:
> >>> 
>  
>  
> > On Apr 3, 2022, at 12:24 PM, Satish Balay  wrote:
> > 
> >> If we had this attitude with the external packages PETSc uses we would 
> >> have to stop using most of the packages/*.py. 
> > 
> > Sure one can take extreme view on both sides. [no change, vs won't 
> > hesitate to change] - having a manageable (minimal) change is harder to 
> > do.
> > 
> > I would point out that most externalpackages don't change much and we 
> > benefit from it - hence we are able to support so may. Some packages 
> > had major changes - and we haven't upgraded to their new versions.
>  
>  What packages are these?  We should have a tool that runs through all 
>  the packages/xxx.py and determines the date of release of the version we 
>  are using and if there are any newer versions available. We could run 
>  this tool automatically a month before each PETSc release sending its 
>  output to petsc-dev to see what we should be updating. 
> >>> 
> >>> sundials, moab [,trilinos - ml was only recently updated] - that I can 
> >>> think off right now.
> >>> 
> >>> Satish
> >>> 
>  
>  Note also that some packages we don't update to, not because of API 
>  changes but because the new releases are broken in some way, this is 
>  life in the HPC world.
>  
>  
>  
>  
> > 
> > [i.e with the current state one can use them only if they completely 
> > buy into petsc ecosystem i.e use old version - but not any larger one - 
> > as in use newer features from them]
> > 
> > We did update to newer interfaces in some packages.
> > 
> > But these problems remain - and have to be dealt with - and sometimes 
> > the 

Re: [petsc-dev] PetscUse/TryMethod

2022-04-03 Thread Satish Balay via petsc-dev
Technically even a cycle can be dealt with.

- have a June release of all packages (with old version dependencies).
- have a Sept release of all packages (with new version dependencies) - but 
don't change the API from June to Sept

But practically - every package has its own release cycle [and
issues] and perhaps its own world view [the same way we think in PETSc-centric mode] -

So far (within the xsdk side of things) - issues came up and we were able
to deal with them when discovered. But it's not clear how we can improve on this

[without some buy-in from individual packages to not break things in a
major way. And adding testing to detect things early enough so there
is time to deal with them.]

Satish

On Sun, 3 Apr 2022, Barry Smith wrote:

> 
>   I agree this is an issue with xsdk; I don't have any good technical 
> solution for packages with cyclic dependencies; for one-way dependencies the 
> solution is easy as Jed pointed out, just sort the dependencies and make sure 
> that packages that depend on, for example, PETSc get final xsdk "releases" 
> after PETSc. Politically this may be hard, technically it is easy. This would 
> mean finalizing hypre and superlu_dist then finalizing PETSc then finalizing 
> dealii. 
> 
> 
> 
> > On Apr 3, 2022, at 1:15 PM, Satish Balay via petsc-dev 
> >  wrote:
> > 
> > This issue comes up in xsdk. most packages [superlu_dist, hypre, petsc, 
> > trilinos - and a bunch of others] attempt to make a release in sept [ECP 
> > milestone].
> > 
> > But then there are non-ecp packages that don't do that - and isues come up 
> > [for ex: dealii usually has an earlier release - that has a dependency on 
> > petsc.]
> > 
> > One of the tasks is to setup CI to detect this early enough. But then - 
> > there could be many breakages - and the first brakage [until its addressed] 
> > prevents one from noticing the next breakage]
> > 
> > https://gitlab.com/xsdk-project/spack-xsdk/-/jobs/2285145631
> > 
> > There are multiple challanges here - [apart from the moving target in each 
> > packages] - just identifying the correct target for the external package - 
> > i.e: will there be a new release of the external package-A before package-B 
> > - and what branch should one target (to test) for that new release etc..
> > 
> > And Its not clear if we can push some of this testing to individual package 
> > test suite [instead of completely at the xsdk level]. i.e what kind of 
> > testing in the PETSc CI cycle would help here?
> > 
> > Just having 1 moving package [petsc4py] in this CI cycle - triggered in a 
> > merge of petsc4py sources in petsc source tree. Not sure how what issues 
> > would come up if we have others.
> > 
> > I have a small subset of this in xsdk ci 
> > https://gitlab.com/xsdk-project/spack-xsdk/-/jobs/2285145624
> > 
> > $ nice ./bin/spack install --fail-fast -j24 slepc@main 
> > ^petsc@main+mpi+hypre+superlu-dist+metis+hdf5~mumps+double~int64 
> > ^netlib-lapack pflotran@develop 
> > ^petsc@main+mpi+hypre+superlu-dist+metis+hdf5~mumps+double~int64 
> > ^netlib-lapack
> > 
> > Satish
> > 
> > On Sun, 3 Apr 2022, Jed Brown wrote:
> > 
> >> Sundials, for example.
> >> 
> >> PETSc is still relatively low in the software stack. If everyone is making 
> >> biannual releases for ECP, then we'd need a topological sort on 
> >> dependencies and PETSc would need to release (or at least freeze) early, 
> >> e.g., January or February, so other packages have time to update by their 
> >> March deadlines.
> >> 
> >> I understand there may have been an assessment that PetscTryMethod sees 
> >> vanishing use by dependencies. I think if we're doing that, it should 
> >> probably be earlier in the release cycle with more effort to assess and 
> >> notify such dependencies. Maybe we could keep a list of high profile 
> >> packages, a script to grep them all, and a weekly(?) CI job that builds 
> >> them.
> >> 
> >> On Sun, Apr 3, 2022, at 10:45 AM, Barry Smith wrote:
> >>> 
> >>> 
> >>>> On Apr 3, 2022, at 12:24 PM, Satish Balay  wrote:
> >>>> 
> >>>>> If we had this attitude with the external packages PETSc uses we would 
> >>>>> have to stop using most of the packages/*.py. 
> >>>> 
> >>>> Sure one can take extreme view on both sides. [no change, vs won't 
> >>>> hesitate to change] - having a manageable (minimal) change is harder to 
> >>>> do.
> >>>> 
> >>>> I would point out th

Re: [petsc-dev] PetscUse/TryMethod

2022-04-03 Thread Satish Balay via petsc-dev
There is certainly frustration with changes.

And then there could be real issues. If similar major changes land in the Sept 
release [at the last minute] of any critical package [and other packages 
don't quickly adapt to it in their own Sept release] - that might break things in a 
way that the xsdk release could not be delivered.

[and usually triggers a hard-to-resolve discussion of who should address this 
failure - the changed package - or the dependent packages. And even if patches are 
available - they might not get in - due to workflow issues and such].

Satish

On Sun, 3 Apr 2022, Barry Smith wrote:

> 
>   We have not updated Sundials because they developed an entirely new code 
> with new APIs, it is essentially a new package with tons of new 
> functionality. Had they been incrementally changing things over the years we 
> would have actually kept up with it; so this is not a good example of how 
> small API changes keep us from upgrading, not at all. It is just an example 
> of how it takes a lot of work to wrap a new large package like Sundials 3 
> from scratch and someone must make a big effort to do it. Note: I think 
> Sundials was right to do a rewrite, their classic design was preventing them 
> from making dramatic additions to the old code by doing a complete rewrite 
> they could accomplish so much more.
> 
>   MOAB has not been updated presumably because there are no or very 
> unaggressive users.
> 
>   Side note: I understand the frustration and grumbling that takes place when 
> one has to deal with change, especially when from a perspective as an 
> outer-sider to a project the change may seem unnecessary, that frustration 
> and grumbling is normal, I do it all the time. But it should not dictate 
> policy.
> 
> 
> 
>   
> 
> > On Apr 3, 2022, at 12:58 PM, Satish Balay  wrote:
> > 
> > On Sun, 3 Apr 2022, Barry Smith wrote:
> > 
> >> 
> >> 
> >>> On Apr 3, 2022, at 12:24 PM, Satish Balay  wrote:
> >>> 
>  If we had this attitude with the external packages PETSc uses we would 
>  have to stop using most of the packages/*.py. 
> >>> 
> >>> Sure one can take extreme view on both sides. [no change, vs won't 
> >>> hesitate to change] - having a manageable (minimal) change is harder to 
> >>> do.
> >>> 
> >>> I would point out that most externalpackages don't change much and we 
> >>> benefit from it - hence we are able to support so may. Some packages had 
> >>> major changes - and we haven't upgraded to their new versions.
> >> 
> >>  What packages are these?  We should have a tool that runs through all the 
> >> packages/xxx.py and determines the date of release of the version we are 
> >> using and if there are any newer versions available. We could run this 
> >> tool automatically a month before each PETSc release sending its output to 
> >> petsc-dev to see what we should be updating. 
> > 
> > sundials, moab [,trilinos - ml was only recently updated] - that I can 
> > think off right now.
> > 
> > Satish
> > 
> >> 
> >>  Note also that some packages we don't update to, not because of API 
> >> changes but because the new releases are broken in some way, this is life 
> >> in the HPC world.
> >> 
> >> 
> >> 
> >> 
> >>> 
> >>> [i.e with the current state one can use them only if they completely buy 
> >>> into petsc ecosystem i.e use old version - but not any larger one - as in 
> >>> use newer features from them]
> >>> 
> >>> We did update to newer interfaces in some packages.
> >>> 
> >>> But these problems remain - and have to be dealt with - and sometimes the 
> >>> complexity increases based on the dependency tree.
> >>> 
> >>> [and also results in folk using and requiring help with older  petsc 
> >>> versions]
> >>> 
> >>> Satish
> >>> 
> >>> 
> >>> On Sun, 3 Apr 2022, Barry Smith wrote:
> >>> 
>  
>  I would say it is not reasonable for the package developers in the xsdk 
>  ecosystem to expect that they can just continue to use another HPC 
>  package for multiple years without doing some minimal amount of work to 
>  keep up with the other packages' new releases. If we had this attitude 
>  with the external packages PETSc uses we would have to stop using most 
>  of the packages/*.py. Yes, it is a constant race to keep up the versions 
>  in packages/*.py and requires some effort but if you want to play in 
>  this game that is a race you have to remain in. And it goes way beyond 
>  HPC, to say you do software development but don't need to manage 
>  constant change in everything is an oxymoron. There was never a golden 
>  age of computing where things didn't change rapidly, pretending there 
>  was or can be is not productive. Of course, we want to minimize public 
>  change, but having a goal of no public change is not a realistic or even 
>  desirable goal.
>  
> > Just noticed - CHKERRQ() got removed from fortran interface - breaking 
> > pflotran
>  
>  This was just a oversight, 

Re: [petsc-dev] PetscUse/TryMethod

2022-04-03 Thread Satish Balay via petsc-dev
This issue comes up in xsdk. Most packages [superlu_dist, hypre, petsc, 
trilinos - and a bunch of others] attempt to make a release in Sept [ECP 
milestone].

But then there are non-ECP packages that don't do that - and issues come up [for 
ex: dealii usually has an earlier release - that has a dependency on petsc.]

One of the tasks is to set up CI to detect this early enough. But then - there 
could be many breakages - and the first breakage [until it's addressed] prevents 
one from noticing the next breakage.

https://gitlab.com/xsdk-project/spack-xsdk/-/jobs/2285145631

There are multiple challenges here - [apart from the moving target in each 
package] - just identifying the correct target for the external package - i.e.: 
will there be a new release of external package-A before package-B - and 
what branch should one target (to test) for that new release, etc.

And it's not clear if we can push some of this testing to the individual package 
test suites [instead of doing it completely at the xsdk level]. I.e. what kind of 
testing in the PETSc CI cycle would help here?

We have just 1 moving package [petsc4py] in this CI cycle - triggered by a merge 
of petsc4py sources into the petsc source tree. Not sure what issues would come 
up if we have others.

I have a small subset of this in xsdk CI 
https://gitlab.com/xsdk-project/spack-xsdk/-/jobs/2285145624

$ nice ./bin/spack install --fail-fast -j24 slepc@main 
^petsc@main+mpi+hypre+superlu-dist+metis+hdf5~mumps+double~int64 ^netlib-lapack 
pflotran@develop 
^petsc@main+mpi+hypre+superlu-dist+metis+hdf5~mumps+double~int64 ^netlib-lapack

Satish

On Sun, 3 Apr 2022, Jed Brown wrote:

> Sundials, for example.
> 
> PETSc is still relatively low in the software stack. If everyone is making 
> biannual releases for ECP, then we'd need a topological sort on dependencies 
> and PETSc would need to release (or at least freeze) early, e.g., January or 
> February, so other packages have time to update by their March deadlines.
> 
> I understand there may have been an assessment that PetscTryMethod sees 
> vanishing use by dependencies. I think if we're doing that, it should 
> probably be earlier in the release cycle with more effort to assess and 
> notify such dependencies. Maybe we could keep a list of high profile 
> packages, a script to grep them all, and a weekly(?) CI job that builds them.
> 
> On Sun, Apr 3, 2022, at 10:45 AM, Barry Smith wrote:
> > 
> > 
> > > On Apr 3, 2022, at 12:24 PM, Satish Balay  wrote:
> > > 
> > >> If we had this attitude with the external packages PETSc uses we would 
> > >> have to stop using most of the packages/*.py. 
> > > 
> > > Sure one can take extreme view on both sides. [no change, vs won't 
> > > hesitate to change] - having a manageable (minimal) change is harder to 
> > > do.
> > > 
> > > I would point out that most externalpackages don't change much and we 
> > > benefit from it - hence we are able to support so may. Some packages had 
> > > major changes - and we haven't upgraded to their new versions.
> > 
> >   What packages are these?  We should have a tool that runs through all the 
> > packages/xxx.py and determines the date of release of the version we are 
> > using and if there are any newer versions available. We could run this tool 
> > automatically a month before each PETSc release sending its output to 
> > petsc-dev to see what we should be updating. 
> > 
> >   Note also that some packages we don't update to, not because of API 
> > changes but because the new releases are broken in some way, this is life 
> > in the HPC world.
> > 
> > 
> > 
> > 
> > > 
> > > [i.e with the current state one can use them only if they completely buy 
> > > into petsc ecosystem i.e use old version - but not any larger one - as in 
> > > use newer features from them]
> > > 
> > > We did update to newer interfaces in some packages.
> > > 
> > > But these problems remain - and have to be dealt with - and sometimes the 
> > > complexity increases based on the dependency tree.
> > > 
> > > [and also results in folk using and requiring help with older  petsc 
> > > versions]
> > > 
> > > Satish
> > > 
> > > 
> > > On Sun, 3 Apr 2022, Barry Smith wrote:
> > > 
> > >> 
> > >>  I would say it is not reasonable for the package developers in the xsdk 
> > >> ecosystem to expect that they can just continue to use another HPC 
> > >> package for multiple years without doing some minimal amount of work to 
> > >> keep up with the other packages' new releases. If we had this attitude 
> > >> with the external packages PETSc uses we would have to stop using most 
> > >> of the packages/*.py. Yes, it is a constant race to keep up the versions 
> > >> in packages/*.py and requires some effort but if you want to play in 
> > >> this game that is a race you have to remain in. And it goes way beyond 
> > >> HPC, to say you do software development but don't need to manage 
> > >> constant change in everything is an oxymoron. There was never a golden 
> > 

Re: [petsc-dev] PetscUse/TryMethod

2022-04-03 Thread Satish Balay via petsc-dev
On Sun, 3 Apr 2022, Barry Smith wrote:

> 
> 
> > On Apr 3, 2022, at 12:24 PM, Satish Balay  wrote:
> > 
> >> If we had this attitude with the external packages PETSc uses we would 
> >> have to stop using most of the packages/*.py. 
> > 
> > Sure one can take extreme view on both sides. [no change, vs won't hesitate 
> > to change] - having a manageable (minimal) change is harder to do.
> > 
> > I would point out that most externalpackages don't change much and we 
> > benefit from it - hence we are able to support so may. Some packages had 
> > major changes - and we haven't upgraded to their new versions.
> 
>   What packages are these?  We should have a tool that runs through all the 
> packages/xxx.py and determines the date of release of the version we are 
> using and if there are any newer versions available. We could run this tool 
> automatically a month before each PETSc release sending its output to 
> petsc-dev to see what we should be updating. 

sundials, moab [, trilinos - ml was only recently updated] - that I can think 
of right now.

Satish

> 
>   Note also that some packages we don't update to, not because of API changes 
> but because the new releases are broken in some way, this is life in the HPC 
> world.
> 
> 
> 
> 
> > 
> > [i.e with the current state one can use them only if they completely buy 
> > into petsc ecosystem i.e use old version - but not any larger one - as in 
> > use newer features from them]
> > 
> > We did update to newer interfaces in some packages.
> > 
> > But these problems remain - and have to be dealt with - and sometimes the 
> > complexity increases based on the dependency tree.
> > 
> > [and also results in folk using and requiring help with older  petsc 
> > versions]
> > 
> > Satish
> > 
> > 
> > On Sun, 3 Apr 2022, Barry Smith wrote:
> > 
> >> 
> >>  I would say it is not reasonable for the package developers in the xsdk 
> >> ecosystem to expect that they can just continue to use another HPC package 
> >> for multiple years without doing some minimal amount of work to keep up 
> >> with the other packages' new releases. If we had this attitude with the 
> >> external packages PETSc uses we would have to stop using most of the 
> >> packages/*.py. Yes, it is a constant race to keep up the versions in 
> >> packages/*.py and requires some effort but if you want to play in this 
> >> game that is a race you have to remain in. And it goes way beyond HPC, to 
> >> say you do software development but don't need to manage constant change 
> >> in everything is an oxymoron. There was never a golden age of computing 
> >> where things didn't change rapidly, pretending there was or can be is not 
> >> productive. Of course, we want to minimize public change, but having a 
> >> goal of no public change is not a realistic or even desirable goal.
> >> 
> >>> Just noticed - CHKERRQ() got removed from fortran interface - breaking 
> >>> pflotran
> >> 
> >> This was just a oversight, easily fixed. 
> >> 
> >>> On Apr 3, 2022, at 11:13 AM, Satish Balay  wrote:
> >>> 
> >>> 
> >>> Note this  is not just 'users should update their code' issue.
> >>> - all packages (that use petsc) would need to do this update
> >>> - and this update doesn't always happen - so pakages will stay at old 
> >>> release - some might not
> >>> - so now we cant build PETSc with both these packages together.
> >>> 
> >>> this type of change causes major issues in xsdk ecosystem (depends on how 
> >>> many direct/indirect dependencies are on the given package)
> >>> 
> >>> Just noticed - CHKERRQ() got removed from fortran interface - breaking 
> >>> pflotran
> >>> 
> >>> https://gitlab.com/xsdk-project/spack-xsdk/-/jobs/2285145624
> >>> 
> >>> [also CHKERRABORT]. Perhaps they can be added back in.
> >>> 
> >>> $ git diff release-3.16..release include/petsc/finclude/petscsys.h
> >>> diff --git a/include/petsc/finclude/petscsys.h 
> >>> b/include/petsc/finclude/petscsys.h
> >>> 
> >>> #define SETERRABORT(c,ierr,s)  call PetscError(c,ierr,0,s); call 
> >>> MPI_Abort(c,ierr)
> >>> -#define CHKERRQ(ierr) if (ierr .ne. 0) then;call 
> >>> PetscErrorF(ierr);return;endif
> >>> +#define PetscCall(ierr) if (ierr .ne. 0) then;call 
> >>> PetscErrorF(ierr);return;endif
> >>> #define CHKERRA(ierr) if (ierr .ne. 0) then;call PetscErrorF(ierr);call 
> >>> MPIU_Abort(PETSC_COMM_SELF,ierr);endif
> >>> -#define CHKERRABORT(c,ierr) if (ierr .ne. 0) then;call 
> >>> PetscErrorF(ierr);call MPI_Abort(c,ierr);endif
> >>> +#define PetscCallAbort(c,ierr) if (ierr .ne. 0) then;call 
> >>> PetscErrorF(ierr);call MPI_Abort(c,ierr);endif
> >>> #define CHKMEMQ call chkmemfortran(__LINE__,__FILE__,ierr)
> >>> 
> >>> Satish
> >>> 
> >>> On Sun, 3 Apr 2022, Barry Smith wrote:
> >>> 
>  
>   To use the latest version of PETSc, each user needs to remove the error 
>  checks on these calls. The resulting code will work with previous 
>  versions of PETSc as well as the current version of PETSc.  PETSc has 
>  

Re: [petsc-dev] PetscUse/TryMethod

2022-04-03 Thread Satish Balay via petsc-dev
> If we had this attitude with the external packages PETSc uses we would have 
> to stop using most of the packages/*.py. 

Sure one can take an extreme view on both sides. [no change, vs won't hesitate to 
change] - having a manageable (minimal) amount of change is harder to do.

I would point out that most external packages don't change much and we benefit 
from it - hence we are able to support so many. Some packages had major changes 
- and we haven't upgraded to their new versions.

[i.e. with the current state one can use them only if they completely buy into 
the petsc ecosystem - i.e. use the old version - but not any newer one - as in, 
use newer features from them]

We did update to newer interfaces in some packages.

But these problems remain - and have to be dealt with - and sometimes the 
complexity increases based on the dependency tree.

[and it also results in folks using and requiring help with older petsc versions]

Satish


On Sun, 3 Apr 2022, Barry Smith wrote:

> 
>   I would say it is not reasonable for the package developers in the xsdk 
> ecosystem to expect that they can just continue to use another HPC package 
> for multiple years without doing some minimal amount of work to keep up with 
> the other packages' new releases. If we had this attitude with the external 
> packages PETSc uses we would have to stop using most of the packages/*.py. 
> Yes, it is a constant race to keep up the versions in packages/*.py and 
> requires some effort but if you want to play in this game that is a race you 
> have to remain in. And it goes way beyond HPC, to say you do software 
> development but don't need to manage constant change in everything is an 
> oxymoron. There was never a golden age of computing where things didn't 
> change rapidly, pretending there was or can be is not productive. Of course, 
> we want to minimize public change, but having a goal of no public change is 
> not a realistic or even desirable goal.
> 
> > Just noticed - CHKERRQ() got removed from fortran interface - breaking 
> > pflotran
> 
> This was just a oversight, easily fixed. 
> 
> > On Apr 3, 2022, at 11:13 AM, Satish Balay  wrote:
> > 
> > 
> > Note this  is not just 'users should update their code' issue.
> > - all packages (that use petsc) would need to do this update
> > - and this update doesn't always happen - so pakages will stay at old 
> > release - some might not
> > - so now we cant build PETSc with both these packages together.
> > 
> > this type of change causes major issues in xsdk ecosystem (depends on how 
> > many direct/indirect dependencies are on the given package)
> > 
> > Just noticed - CHKERRQ() got removed from fortran interface - breaking 
> > pflotran
> > 
> > https://gitlab.com/xsdk-project/spack-xsdk/-/jobs/2285145624
> > 
> > [also CHKERRABORT]. Perhaps they can be added back in.
> > 
> > $ git diff release-3.16..release include/petsc/finclude/petscsys.h
> > diff --git a/include/petsc/finclude/petscsys.h 
> > b/include/petsc/finclude/petscsys.h
> > 
> > #define SETERRABORT(c,ierr,s)  call PetscError(c,ierr,0,s); call 
> > MPI_Abort(c,ierr)
> > -#define CHKERRQ(ierr) if (ierr .ne. 0) then;call 
> > PetscErrorF(ierr);return;endif
> > +#define PetscCall(ierr) if (ierr .ne. 0) then;call 
> > PetscErrorF(ierr);return;endif
> > #define CHKERRA(ierr) if (ierr .ne. 0) then;call PetscErrorF(ierr);call 
> > MPIU_Abort(PETSC_COMM_SELF,ierr);endif
> > -#define CHKERRABORT(c,ierr) if (ierr .ne. 0) then;call 
> > PetscErrorF(ierr);call MPI_Abort(c,ierr);endif
> > +#define PetscCallAbort(c,ierr) if (ierr .ne. 0) then;call 
> > PetscErrorF(ierr);call MPI_Abort(c,ierr);endif
> > #define CHKMEMQ call chkmemfortran(__LINE__,__FILE__,ierr)
> > 
> > Satish
> > 
> > On Sun, 3 Apr 2022, Barry Smith wrote:
> > 
> >> 
> >>   To use the latest version of PETSc, each user needs to remove the error 
> >> checks on these calls. The resulting code will work with previous versions 
> >> of PETSc as well as the current version of PETSc.  PETSc has never 
> >> promised complete backward compatibility in the sense of promising that 
> >> one can use new PETSc releases without any changes to their code; the 
> >> documentation has always stated new releases will contain changes in the 
> >> API. We began using depreciate a few years ago to limit the number of 
> >> changes that needed to be made immediately for each release but depreciate 
> >> is not suitable for all changes and so users do need to make some changes 
> >> for each new release. 
> >> 
> >> 
> >> 
> >> 
> >> 
> >>> On Apr 3, 2022, at 7:23 AM, Lisandro Dalcin  wrote:
> >>> 
> >>> The recent PetscUse/TryMethod changes are backward incompatible. 
> >>> Third-party codes cannot compile without modification. Our users deserve 
> >>> better.
> >>> 
> >>> 
> >>> -- 
> >>> Lisandro Dalcin
> >>> 
> >>> Senior Research Scientist
> >>> Extreme Computing Research Center (ECRC)
> >>> King Abdullah University of Science and Technology (KAUST)
> >>> http://ecrc.kaust.edu.sa/ 

Re: [petsc-dev] PetscUse/TryMethod

2022-04-03 Thread Satish Balay via petsc-dev
On Sun, 3 Apr 2022, Satish Balay via petsc-dev wrote:

> Just noticed - CHKERRQ() got removed from fortran interface - breaking 
> pflotran
> 
> https://gitlab.com/xsdk-project/spack-xsdk/-/jobs/2285145624
> 
> [also CHKERRABORT]. Perhaps they can be added back in.
> 
> $ git diff release-3.16..release include/petsc/finclude/petscsys.h
> diff --git a/include/petsc/finclude/petscsys.h 
> b/include/petsc/finclude/petscsys.h
> 
>  #define SETERRABORT(c,ierr,s)  call PetscError(c,ierr,0,s); call 
> MPI_Abort(c,ierr)
> -#define CHKERRQ(ierr) if (ierr .ne. 0) then;call 
> PetscErrorF(ierr);return;endif
> +#define PetscCall(ierr) if (ierr .ne. 0) then;call 
> PetscErrorF(ierr);return;endif
>  #define CHKERRA(ierr) if (ierr .ne. 0) then;call PetscErrorF(ierr);call 
> MPIU_Abort(PETSC_COMM_SELF,ierr);endif
> -#define CHKERRABORT(c,ierr) if (ierr .ne. 0) then;call 
> PetscErrorF(ierr);call MPI_Abort(c,ierr);endif
> +#define PetscCallAbort(c,ierr) if (ierr .ne. 0) then;call 
> PetscErrorF(ierr);call MPI_Abort(c,ierr);endif
>  #define CHKMEMQ call chkmemfortran(__LINE__,__FILE__,ierr)

https://gitlab.com/petsc/petsc/-/merge_requests/5073
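
That MR adds the old names back. A minimal sketch of the kind of compatibility
shim involved (illustration only - the actual change in the MR may differ):
keep the old macros as aliases for the new ones in
include/petsc/finclude/petscsys.h, e.g.

#define CHKERRQ(ierr)          PetscCall(ierr)
#define CHKERRABORT(comm,ierr) PetscCallAbort(comm,ierr)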

Satish


Re: [petsc-dev] PetscUse/TryMethod

2022-04-03 Thread Satish Balay via petsc-dev


Note this is not just a 'users should update their code' issue.
- all packages (that use petsc) would need to do this update
- and this update doesn't always happen - some packages will stay at the old 
release - some might not
- so now we can't build PETSc with both these packages together.

This type of change causes major issues in the xsdk ecosystem (depending on how many 
direct/indirect dependencies are on the given package)

Just noticed - CHKERRQ() got removed from the Fortran interface - breaking pflotran

https://gitlab.com/xsdk-project/spack-xsdk/-/jobs/2285145624

[also CHKERRABORT]. Perhaps they can be added back in.

$ git diff release-3.16..release include/petsc/finclude/petscsys.h
diff --git a/include/petsc/finclude/petscsys.h 
b/include/petsc/finclude/petscsys.h

 #define SETERRABORT(c,ierr,s)  call PetscError(c,ierr,0,s); call 
MPI_Abort(c,ierr)
-#define CHKERRQ(ierr) if (ierr .ne. 0) then;call PetscErrorF(ierr);return;endif
+#define PetscCall(ierr) if (ierr .ne. 0) then;call 
PetscErrorF(ierr);return;endif
 #define CHKERRA(ierr) if (ierr .ne. 0) then;call PetscErrorF(ierr);call 
MPIU_Abort(PETSC_COMM_SELF,ierr);endif
-#define CHKERRABORT(c,ierr) if (ierr .ne. 0) then;call PetscErrorF(ierr);call 
MPI_Abort(c,ierr);endif
+#define PetscCallAbort(c,ierr) if (ierr .ne. 0) then;call 
PetscErrorF(ierr);call MPI_Abort(c,ierr);endif
 #define CHKMEMQ call chkmemfortran(__LINE__,__FILE__,ierr)

Satish

On Sun, 3 Apr 2022, Barry Smith wrote:

> 
>To use the latest version of PETSc, each user needs to remove the error 
> checks on these calls. The resulting code will work with previous versions of 
> PETSc as well as the current version of PETSc.  PETSc has never promised 
> complete backward compatibility in the sense of promising that one can use 
> new PETSc releases without any changes to their code; the documentation has 
> always stated new releases will contain changes in the API. We began using 
> depreciate a few years ago to limit the number of changes that needed to be 
> made immediately for each release but depreciate is not suitable for all 
> changes and so users do need to make some changes for each new release. 
> 
>
> 
> 
> 
> > On Apr 3, 2022, at 7:23 AM, Lisandro Dalcin  wrote:
> > 
> > The recent PetscUse/TryMethod changes are backward incompatible. 
> > Third-party codes cannot compile without modification. Our users deserve 
> > better.
> > 
> > 
> > -- 
> > Lisandro Dalcin
> > 
> > Senior Research Scientist
> > Extreme Computing Research Center (ECRC)
> > King Abdullah University of Science and Technology (KAUST)
> > http://ecrc.kaust.edu.sa/ 
> 
> 



Re: [petsc-dev] PetscSFCount is not compatible with MPI_Count

2022-03-30 Thread Satish Balay via petsc-dev
On Tue, 29 Mar 2022, Junchao Zhang wrote:

> Also, it looks we need a 64-bit CI job on Mac.

Pushed a CI update to https://gitlab.com/petsc/petsc/-/merge_requests/5050

Satish


Re: [petsc-dev] PetscSFCount is not compatible with MPI_Count

2022-03-29 Thread Satish Balay via petsc-dev
On Tue, 29 Mar 2022, Junchao Zhang wrote:

> On Tue, Mar 29, 2022 at 4:59 PM Satish Balay via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
> 
> > We do have such builds in CI - don't know why CI didn't catch it.
> >
> > $ grep with-64-bit-indices=1 *.py
> > arch-ci-freebsd-cxx-cmplx-64idx-dbg.py:  '--with-64-bit-indices=1',
> > arch-ci-linux-cuda-double-64idx.py:'--with-64-bit-indices=1',
> > arch-ci-linux-cxx-cmplx-pkgs-64idx.py:  '--with-64-bit-indices=1',
> > arch-ci-linux-pkgs-64idx.py:  '--with-64-bit-indices=1',
> > arch-ci-opensolaris-misc.py:  '--with-64-bit-indices=1',
> >
> > It implies these CI jobs do not have a recent MPI (like MPICH-4.x ) that
> supports MPI-4 large count? It looks we need to have one.

And a Mac one.

I can't reproduce this on Linux [even with the latest clang]

Satish

> 
> 
> >
> > Satish
> >
> > On Tue, 29 Mar 2022, Fande Kong wrote:
> >
> > > OK, I attached the configure log here so that we have move information.
> > >
> > > I feel like we should do
> > >
> > > typedef MPI_Count PetscSFCount
> > >
> > > Do we have the target of 64-bit-indices with C++ in CI? I was
> > > surprised that I am the only guy who saw this issue
> > >
> > > Thanks,
> > >
> > > Fande
> > >
> > > On Tue, Mar 29, 2022 at 2:50 PM Satish Balay  wrote:
> > >
> > > > What MPI is this? How to reproduce?
> > > >
> > > > Perhaps its best if you can send the relevant logs.
> > > >
> > > > The likely trigger code in sfneighbor.c:
> > > >
> > > > >>>>
> > > > /* A convenience temporary type */
> > > > #if defined(PETSC_HAVE_MPI_LARGE_COUNT) &&
> > defined(PETSC_USE_64BIT_INDICES)
> > > >   typedef PetscInt PetscSFCount;
> > > > #else
> > > >   typedef PetscMPIInt  PetscSFCount;
> > > > #endif
> > > >
> > > > This change is at https://gitlab.com/petsc/petsc/-/commit/c87b50c4628
> > > >
> > > > Hm - if MPI supported LARGE_COUNT - perhaps it also provides a type
> > that
> > > > should go with it which we could use - instead of PetscInt?
> > > >
> > > >
> > > > Perhaps it should be: "typedef log PetscSFCount;"
> > > >
> > > > Satish
> > > >
> > > >
> > > > On Tue, 29 Mar 2022, Fande Kong wrote:
> > > >
> > > > > It seems correct according to
> > > > >
> > > > > #define PETSC_SIZEOF_LONG 8
> > > > >
> > > > > #define PETSC_SIZEOF_LONG_LONG 8
> > > > >
> > > > >
> > > > > Can not convert from "non-constant" to "constant"?
> > > > >
> > > > > Fande
> > > > >
> > > > > On Tue, Mar 29, 2022 at 2:22 PM Fande Kong 
> > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > When building PETSc with 64 bit indices, it seems that
> > PetscSFCount is
> > > > > > 64-bit integer while MPI_Count is still 32 bit.
> > > > > >
> > > > > > typedef long MPI_Count;
> > > > > >
> > > > > > typedef PetscInt   PetscSFCount;
> > > > > >
> > > > > >
> > > > > >  I had the following errors. Do I have a bad MPI?
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Fande
> > > > > >
> > > > > >
> > > > > >
> > > >
> > Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:171:18:
> > > > > > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > > > >
> > > > > >
> > > >
> > PetscCallMPI(MPIU_Ineighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,distcomm,req));
> > > > > >
> > > > > >
> > > >
> > ^~~~
> > > > > >
> > > >
> > /Users/kongf/projects/moose6/petsc

Re: [petsc-dev] PetscSFCount is not compatible with MPI_Count

2022-03-29 Thread Satish Balay via petsc-dev
I'm not sure why we have PetscSFCount - and not always use MPI_Count.

Maybe this would work?

Perhaps Junchao can clarify

Satish

---

diff --git a/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c 
b/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c
index 5dc2e8c0b2..10f42fc302 100644
--- a/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c
+++ b/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c
@@ -1,12 +1,7 @@
 #include <../src/vec/is/sf/impls/basic/sfpack.h>
 #include <../src/vec/is/sf/impls/basic/sfbasic.h>
 
-/* A convenience temporary type */
-#if defined(PETSC_HAVE_MPI_LARGE_COUNT) && defined(PETSC_USE_64BIT_INDICES)
-  typedef PetscInt PetscSFCount;
-#else
-  typedef PetscMPIInt  PetscSFCount;
-#endif
+typedef MPI_Count PetscSFCount;
 
 typedef struct {
   SFBASICHEADER;


On Tue, 29 Mar 2022, Fande Kong wrote:

> OK, this works for me.
> 
> (moose) kongf@FN428781 petsc1 % git diff
> 
> *diff --git a/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c
> b/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c*
> 
> *index 5dc2e8c0b2..c2cc72dfa9 100644*
> 
> *--- a/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c*
> 
> *+++ b/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c*
> 
> @@ -3,7 +3,7 @@
> 
> 
> 
>  /* A convenience temporary type */
> 
>  #if defined(PETSC_HAVE_MPI_LARGE_COUNT) && defined(PETSC_USE_64BIT_INDICES)
> 
> -  typedef PetscInt PetscSFCount;
> 
> +  typedef MPI_Count PetscSFCount;
> 
>  #else
> 
>typedef PetscMPIInt  PetscSFCount;
> 
>  #endif
> 
> On Tue, Mar 29, 2022 at 3:49 PM Fande Kong  wrote:
> 
> > OK, I attached the configure log here so that we have move information.
> >
> > I feel like we should do
> >
> > typedef MPI_Count PetscSFCount
> >
> > Do we have the target of 64-bit-indices with C++ in CI? I was
> > surprised that I am the only guy who saw this issue
> >
> > Thanks,
> >
> > Fande
> >
> > On Tue, Mar 29, 2022 at 2:50 PM Satish Balay  wrote:
> >
> >> What MPI is this? How to reproduce?
> >>
> >> Perhaps its best if you can send the relevant logs.
> >>
> >> The likely trigger code in sfneighbor.c:
> >>
> >> 
> >> /* A convenience temporary type */
> >> #if defined(PETSC_HAVE_MPI_LARGE_COUNT) &&
> >> defined(PETSC_USE_64BIT_INDICES)
> >>   typedef PetscInt PetscSFCount;
> >> #else
> >>   typedef PetscMPIInt  PetscSFCount;
> >> #endif
> >>
> >> This change is at https://gitlab.com/petsc/petsc/-/commit/c87b50c4628
> >>
> >> Hm - if MPI supported LARGE_COUNT - perhaps it also provides a type that
> >> should go with it which we could use - instead of PetscInt?
> >>
> >>
> >> Perhaps it should be: "typedef log PetscSFCount;"
> >>
> >> Satish
> >>
> >>
> >> On Tue, 29 Mar 2022, Fande Kong wrote:
> >>
> >> > It seems correct according to
> >> >
> >> > #define PETSC_SIZEOF_LONG 8
> >> >
> >> > #define PETSC_SIZEOF_LONG_LONG 8
> >> >
> >> >
> >> > Can not convert from "non-constant" to "constant"?
> >> >
> >> > Fande
> >> >
> >> > On Tue, Mar 29, 2022 at 2:22 PM Fande Kong  wrote:
> >> >
> >> > > Hi All,
> >> > >
> >> > > When building PETSc with 64 bit indices, it seems that PetscSFCount is
> >> > > 64-bit integer while MPI_Count is still 32 bit.
> >> > >
> >> > > typedef long MPI_Count;
> >> > >
> >> > > typedef PetscInt   PetscSFCount;
> >> > >
> >> > >
> >> > >  I had the following errors. Do I have a bad MPI?
> >> > >
> >> > > Thanks,
> >> > >
> >> > > Fande
> >> > >
> >> > >
> >> > >
> >> Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:171:18:
> >> > > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> >> > >
> >> > >
> >> PetscCallMPI(MPIU_Ineighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,distcomm,req));
> >> > >
> >> > >
> >> ^~~~
> >> > >
> >> /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> >> > > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> >> > >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j)
> >> > > MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> >> > >
> >> > > ^
> >> > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32: note:
> >> > > expanded from macro 'PetscCallMPI'
> >> > > PetscMPIInt _7_errorcode = __VA_ARGS__;
> >> > >  \
> >> > >^~~
> >> > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5: note:
> >> > > candidate function not viable: no known conversion from 'PetscSFCount
> >> *'
> >> > > (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for
> >> 2nd
> >> > > argument
> >> > > int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count
> >> > > sendcounts[],
> >> > > ^
> >> > >
> >> /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:195:18:
> >> > > error: no matching function for 

Re: [petsc-dev] PetscSFCount is not compatible with MPI_Count

2022-03-29 Thread Satish Balay via petsc-dev
We do have such builds in CI - don't know why CI didn't catch it.

$ grep with-64-bit-indices=1 *.py
arch-ci-freebsd-cxx-cmplx-64idx-dbg.py:  '--with-64-bit-indices=1',
arch-ci-linux-cuda-double-64idx.py:'--with-64-bit-indices=1',
arch-ci-linux-cxx-cmplx-pkgs-64idx.py:  '--with-64-bit-indices=1',
arch-ci-linux-pkgs-64idx.py:  '--with-64-bit-indices=1',
arch-ci-opensolaris-misc.py:  '--with-64-bit-indices=1',


Satish

On Tue, 29 Mar 2022, Fande Kong wrote:

> OK, I attached the configure log here so that we have move information.
> 
> I feel like we should do
> 
> typedef MPI_Count PetscSFCount
> 
> Do we have the target of 64-bit-indices with C++ in CI? I was
> surprised that I am the only guy who saw this issue
> 
> Thanks,
> 
> Fande
> 
> On Tue, Mar 29, 2022 at 2:50 PM Satish Balay  wrote:
> 
> > What MPI is this? How to reproduce?
> >
> > Perhaps its best if you can send the relevant logs.
> >
> > The likely trigger code in sfneighbor.c:
> >
> > 
> > /* A convenience temporary type */
> > #if defined(PETSC_HAVE_MPI_LARGE_COUNT) && defined(PETSC_USE_64BIT_INDICES)
> >   typedef PetscInt PetscSFCount;
> > #else
> >   typedef PetscMPIInt  PetscSFCount;
> > #endif
> >
> > This change is at https://gitlab.com/petsc/petsc/-/commit/c87b50c4628
> >
> > Hm - if MPI supported LARGE_COUNT - perhaps it also provides a type that
> > should go with it which we could use - instead of PetscInt?
> >
> >
> > Perhaps it should be: "typedef log PetscSFCount;"
> >
> > Satish
> >
> >
> > On Tue, 29 Mar 2022, Fande Kong wrote:
> >
> > > It seems correct according to
> > >
> > > #define PETSC_SIZEOF_LONG 8
> > >
> > > #define PETSC_SIZEOF_LONG_LONG 8
> > >
> > >
> > > Can not convert from "non-constant" to "constant"?
> > >
> > > Fande
> > >
> > > On Tue, Mar 29, 2022 at 2:22 PM Fande Kong  wrote:
> > >
> > > > Hi All,
> > > >
> > > > When building PETSc with 64 bit indices, it seems that PetscSFCount is
> > > > 64-bit integer while MPI_Count is still 32 bit.
> > > >
> > > > typedef long MPI_Count;
> > > >
> > > > typedef PetscInt   PetscSFCount;
> > > >
> > > >
> > > >  I had the following errors. Do I have a bad MPI?
> > > >
> > > > Thanks,
> > > >
> > > > Fande
> > > >
> > > >
> > > >
> > Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:171:18:
> > > > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > >
> > > >
> > PetscCallMPI(MPIU_Ineighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,distcomm,req));
> > > >
> > > >
> > ^~~~
> > > >
> > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> > > > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> > > >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j)
> > > > MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> > > >
> > > > ^
> > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32: note:
> > > > expanded from macro 'PetscCallMPI'
> > > > PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > >  \
> > > >^~~
> > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5: note:
> > > > candidate function not viable: no known conversion from 'PetscSFCount
> > *'
> > > > (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for 2nd
> > > > argument
> > > > int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count
> > > > sendcounts[],
> > > > ^
> > > >
> > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:195:18:
> > > > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> > > >
> > > >
> > PetscCallMPI(MPIU_Ineighbor_alltoallv(leafbuf,dat->leafcounts,dat->leafdispls,unit,rootbuf,dat->rootcounts,dat->rootdispls,unit,distcomm,req));
> > > >
> > > >
> > ^~~~
> > > >
> > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> > > > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> > > >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j)
> > > > MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> > > >
> > > > ^
> > > > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32: note:
> > > > expanded from macro 'PetscCallMPI'
> > > > PetscMPIInt _7_errorcode = __VA_ARGS__;
> > > >  \
> > > >^~~
> > > > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5: note:
> > > > candidate function not viable: no known conversion from 'PetscSFCount
> > *'
> > > > (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for 2nd
> > > > argument
> > > > int 

Re: [petsc-dev] PetscSFCount is not compatible with MPI_Count

2022-03-29 Thread Satish Balay via petsc-dev
What MPI is this? How to reproduce?

Perhaps it's best if you can send the relevant logs.

The likely trigger code in sfneighbor.c: 


/* A convenience temporary type */
#if defined(PETSC_HAVE_MPI_LARGE_COUNT) && defined(PETSC_USE_64BIT_INDICES)
  typedef PetscInt PetscSFCount;
#else
  typedef PetscMPIInt  PetscSFCount;
#endif

This change is at https://gitlab.com/petsc/petsc/-/commit/c87b50c4628

Hm - if the MPI supports LARGE_COUNT - perhaps it also provides a type that should 
go with it, which we could use - instead of PetscInt?


Perhaps it should be: "typedef long PetscSFCount;"
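
Also - a quick standalone check like the one below would tell us what MPI_Count
actually is in your MPI (just a throwaway diagnostic - not PETSc code):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  /* per the error above MPI_Count is 'long' in this MPICH build, while
     PetscInt with --with-64-bit-indices is 'long long' - same size, but a
     distinct type, which is why the C++ compile fails */
  printf("sizeof(MPI_Count) = %d  sizeof(long long) = %d\n",
         (int)sizeof(MPI_Count), (int)sizeof(long long));
  MPI_Finalize();
  return 0;
}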

Satish


On Tue, 29 Mar 2022, Fande Kong wrote:

> It seems correct according to
> 
> #define PETSC_SIZEOF_LONG 8
> 
> #define PETSC_SIZEOF_LONG_LONG 8
> 
> 
> Can not convert from "non-constant" to "constant"?
> 
> Fande
> 
> On Tue, Mar 29, 2022 at 2:22 PM Fande Kong  wrote:
> 
> > Hi All,
> >
> > When building PETSc with 64 bit indices, it seems that PetscSFCount is
> > 64-bit integer while MPI_Count is still 32 bit.
> >
> > typedef long MPI_Count;
> >
> > typedef PetscInt   PetscSFCount;
> >
> >
> >  I had the following errors. Do I have a bad MPI?
> >
> > Thanks,
> >
> > Fande
> >
> >
> > Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:171:18:
> > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> >
> > PetscCallMPI(MPIU_Ineighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,distcomm,req));
> >
> >  
> > ^~~~
> > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j)
> > MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> >
> > ^
> > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32: note:
> > expanded from macro 'PetscCallMPI'
> > PetscMPIInt _7_errorcode = __VA_ARGS__;
> >  \
> >^~~
> > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5: note:
> > candidate function not viable: no known conversion from 'PetscSFCount *'
> > (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for 2nd
> > argument
> > int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count
> > sendcounts[],
> > ^
> > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:195:18:
> > error: no matching function for call to 'MPI_Ineighbor_alltoallv_c'
> >
> > PetscCallMPI(MPIU_Ineighbor_alltoallv(leafbuf,dat->leafcounts,dat->leafdispls,unit,rootbuf,dat->rootcounts,dat->rootdispls,unit,distcomm,req));
> >
> >  
> > ^~~~
> > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:97:79:
> > note: expanded from macro 'MPIU_Ineighbor_alltoallv'
> >   #define MPIU_Ineighbor_alltoallv(a,b,c,d,e,f,g,h,i,j)
> > MPI_Ineighbor_alltoallv_c(a,b,c,d,e,f,g,h,i,j)
> >
> > ^
> > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32: note:
> > expanded from macro 'PetscCallMPI'
> > PetscMPIInt _7_errorcode = __VA_ARGS__;
> >  \
> >^~~
> > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:945:5: note:
> > candidate function not viable: no known conversion from 'PetscSFCount *'
> > (aka 'long long *') to 'const MPI_Count *' (aka 'const long *') for 2nd
> > argument
> > int MPI_Ineighbor_alltoallv_c(const void *sendbuf, const MPI_Count
> > sendcounts[],
> > ^
> > /Users/kongf/projects/moose6/petsc1/src/vec/is/sf/impls/basic/neighbor/sfneighbor.c:240:18:
> > error: no matching function for call to 'MPI_Neighbor_alltoallv_c'
> >
> > PetscCallMPI(MPIU_Neighbor_alltoallv(rootbuf,dat->rootcounts,dat->rootdispls,unit,leafbuf,dat->leafcounts,dat->leafdispls,unit,comm));
> >
> >  
> > ^~~
> > /Users/kongf/projects/moose6/petsc1/include/petsc/private/mpiutils.h:96:79:
> > note: expanded from macro 'MPIU_Neighbor_alltoallv'
> >   #define MPIU_Neighbor_alltoallv(a,b,c,d,e,f,g,h,i)
> >MPI_Neighbor_alltoallv_c(a,b,c,d,e,f,g,h,i)
> >
> > ^~~~
> > /Users/kongf/projects/moose6/petsc1/include/petscerror.h:407:32: note:
> > expanded from macro 'PetscCallMPI'
> > PetscMPIInt _7_errorcode = __VA_ARGS__;
> >  \
> >^~~
> > /Users/kongf/mambaforge3/envs/moose/include/mpi_proto.h:1001:5: note:
> > candidate function not viable: no known conversion from 'PetscSFCount *'
> > (aka 'long long *') to 'const 

Re: [petsc-dev] petsc release plan for Mar/2022

2022-03-25 Thread Satish Balay via petsc-dev
A reminder!

Currently I see the following MRs with the v3.17-release milestone:

Satish

---

Cleaning up some Plex I/O
!5013 · created 2 days ago by Matthew Knepley   v3.17-release  

Change a bunch of PetscCheckFalse() to PetscCheck() in pc
!5000 · created 3 days ago by Barry Smith   v3.17-release  

Support COO for MatHypre
!5023 · created 10 hours ago by Junchao Zhang   v3.17-release  

Plex: Enhance extrusion
!5019 · created 21 hours ago by Matthew Knepley   v3.17-release  

Draft: SNES: delete empty examples   0 of 2 tasks completed
!5024 · created 4 hours ago by Vaclav Hapla   v3.17-release  

PetscSection: Lazy maxDof calculation.
!5025 · created 3 hours ago by Vaclav Hapla   v3.17-release  

Mat_MPIAIJCUSPARSE does not need to have its own stream or cuSparse handle
!5020 · created 17 hours ago by Junchao Zhang   v3.17-release  

Draft: configure: update packages for 3.17.0
!4982 · created 1 week ago by Pierre Jolivet   v3.17-release  

Draft: release: set petsc v3.17.0 strings
!5015 · created 1 day ago by Satish Balay   v3.17-release

Remove use of PetscCheckFalse() on a subset of examples
!5006 · created 2 days ago by Barry Smith   v3.17-release  

dmnetwork rebalance vertices
!4945 · created 2 weeks ago by Getnet Betrie   v3.17-release  


On Thu, 3 Mar 2022, Satish Balay via petsc-dev wrote:

> All,
> 
> Its time for another PETSc release - due end of March.
> 
> For this release [3.17], lets work with the following dates:
> 
> - feature freeze: March 28 say 5PM EST
> - release: March 30 say 5PM EST
> 
> Merges after freeze should contain only fixes that would normally be 
> acceptable to "release" work-flow.
> 
> I've created a new milestone 'v3.17-release'. So if you are working on a MR 
> with the goal of merging before release - its best to use this tag with the 
> MR.
> 
> And it would be good to avoid merging large changes at the last minute. And 
> not have merge requests stuck in need of reviews, testing and other necessary 
> tasks.
> 
> And I would think the testing/CI resources would get stressed in this 
> timeframe - so it would be good to use them judiciously if possible.
> 
> - if there are failures in stage-2 or 3 - and its no longer necessary to 
> complete all the jobs - one can 'cancel' the pipeline.
> - if a fix needs to be tested - one can first test with only the failed jobs 
> (if this is known) - before doing a full test pipeline. i.e:
>- use the automatically started and paused 'merge-request' pipeline (or 
> start new 'web' pipeline, and cancel it immediately)
>- now toggle only the jobs that need to be run
>- [on success of the selected jobs] if one wants to run the full pipeleine 
> - click 'retry' - and the remaining canceled jobs should now get scheduled.
> 
> Thanks,
> Satish
> 


[petsc-dev] petsc-3.16.5 now available

2022-03-04 Thread Satish Balay via petsc-dev
Dear PETSc users,

The patch release petsc-3.16.5 is now available for download.

https://petsc.org/release/download/

Satish





[petsc-dev] petsc release plan for Mar/2022

2022-03-03 Thread Satish Balay via petsc-dev
All,

Its time for another PETSc release - due end of March.

For this release [3.17], lets work with the following dates:

- feature freeze: March 28 say 5PM EST
- release: March 30 say 5PM EST

Merges after the freeze should contain only fixes that would normally be acceptable 
to the "release" workflow.

I've created a new milestone 'v3.17-release'. So if you are working on an MR 
with the goal of merging before the release - it's best to use this milestone on the MR.

And it would be good to avoid merging large changes at the last minute. And not to 
have merge requests stuck in need of reviews, testing and other necessary tasks.

And I would think the testing/CI resources would get stressed in this timeframe 
- so it would be good to use them judiciously if possible.

- if there are failures in stage-2 or 3 - and it's no longer necessary to 
complete all the jobs - one can 'cancel' the pipeline.
- if a fix needs to be tested - one can first test with only the failed jobs 
(if these are known) - before doing a full test pipeline. I.e.:
   - use the automatically started and paused 'merge-request' pipeline (or 
start a new 'web' pipeline, and cancel it immediately)
   - now toggle only the jobs that need to be run
   - [on success of the selected jobs] if one wants to run the full pipeline - 
click 'retry' - and the remaining canceled jobs should now get scheduled.

Thanks,
Satish



[petsc-dev] petsc-3.16.4 now available

2022-02-02 Thread Satish Balay via petsc-dev
Dear PETSc users,

The patch release petsc-3.16.4 is now available for download.

https://petsc.org/release/download/

Satish





Re: [petsc-dev] ftn-auto in $PETSC_DIR/include ?

2022-01-27 Thread Satish Balay via petsc-dev
The change is at  https://gitlab.com/petsc/petsc/-/merge_requests/4770

Satish

On Thu, 27 Jan 2022, Satish Balay via petsc-dev wrote:

> And the source inside include don't get built.
> 
> I guess the fix is to switch these stubs to custom [and move the sources to 
> src/sys/logging/ftn-custom/]
> 
> Will do.
> 
> Satish
> 
> On Thu, 27 Jan 2022, Stefano Zampini wrote:
> 
> > This is a bug and it should be fixed
> > 
> > Il giorno gio 27 gen 2022 alle ore 15:22 Jose E. Roman 
> > ha scritto:
> > 
> > > That is because PetscLogFlops() has an auto fortran stub. This is a
> > > PETSC_STATIC_INLINE function in include/petsclog.h
> > >
> > > Jose
> > >
> > >
> > > > El 27 ene 2022, a las 13:15, Stefano Zampini 
> > > escribió:
> > > >
> > > > Just noticed this. Is it normal to have a ftn-auto directory generated
> > > by bfort in $PETSC_DIR/include?
> > > >
> > > > (ecrcml-user) [szampini@localhost petsc]$ ls include/ftn-auto
> > > > makefile  petscloghf.c
> > > >
> > > >
> > > > --
> > > > Stefano
> > >
> > >
> > 
> > 
> 


Re: [petsc-dev] ftn-auto in $PETSC_DIR/include ?

2022-01-27 Thread Satish Balay via petsc-dev
And the sources inside include don't get built.

I guess the fix is to switch these stubs to custom [and move the sources to 
src/sys/logging/ftn-custom/]

Will do.
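
For reference - the custom stub would look something like the sketch below
(illustration only; the actual file name and details in the ftn-custom sources
may differ):

/* e.g. src/sys/logging/ftn-custom/zploghf.c (file name illustrative) */
#include <petsc/private/fortranimpl.h>

#if defined(PETSC_HAVE_FORTRAN_CAPS)
  #define petsclogflops_ PETSCLOGFLOPS
#elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE)
  #define petsclogflops_ petsclogflops
#endif

PETSC_EXTERN void petsclogflops_(PetscLogDouble *n, PetscErrorCode *ierr)
{
  *ierr = PetscLogFlops(*n);
}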

Satish

On Thu, 27 Jan 2022, Stefano Zampini wrote:

> This is a bug and it should be fixed
> 
> Il giorno gio 27 gen 2022 alle ore 15:22 Jose E. Roman 
> ha scritto:
> 
> > That is because PetscLogFlops() has an auto fortran stub. This is a
> > PETSC_STATIC_INLINE function in include/petsclog.h
> >
> > Jose
> >
> >
> > > El 27 ene 2022, a las 13:15, Stefano Zampini 
> > escribió:
> > >
> > > Just noticed this. Is it normal to have a ftn-auto directory generated
> > by bfort in $PETSC_DIR/include?
> > >
> > > (ecrcml-user) [szampini@localhost petsc]$ ls include/ftn-auto
> > > makefile  petscloghf.c
> > >
> > >
> > > --
> > > Stefano
> >
> >
> 
> 


[petsc-dev] petsc-3.16.3 now available

2022-01-05 Thread Satish Balay via petsc-dev
Dear PETSc users,

The patch release petsc-3.16.3 is now available for download.

https://petsc.org/release/download/

Satish





Re: [petsc-dev] I think the Windows machine has fallen over...

2021-12-13 Thread Satish Balay via petsc-dev
The box is rebooted now.

Satish

On Mon, 13 Dec 2021, Matthew Knepley wrote:

> https://gitlab.com/petsc/petsc/-/jobs/1879771695
> 
>   Thanks,
> 
>  Matt
> 
> 



Re: [petsc-dev] spock

2021-12-10 Thread Satish Balay via petsc-dev
On Fri, 10 Dec 2021, Mark Adams wrote:

> I was able to run a parallel test manually.
> 
> Do you have any thoughts on Kokkos?
> '--with-kokkos-hip-arch=VEGA908',

Configure sets that automatically from --with-hip-arch - which is auto-detected 
from 'rocminfo' [which appears to work on spock]

Perhaps it should also set --with-magma-gputarget the same way.

> On Fri, Dec 10, 2021 at 11:08 AM Mark Adams  wrote:
> 
> > It seems to be hanging on the 2 processor test.
> > I'll try running jobs manually.


Hm - perhaps the srun command you need is different?

'--with-mpiexec=srun -p ecp -N 1 -A csc314 -t 00:10:00'

Satish

> >
> > On Fri, Dec 10, 2021 at 9:34 AM Satish Balay  wrote:
> >
> >> Merged now. And the following now works [for me].
> >>
> >>  1025  git fetch -p
> >>  1026  git checkout origin/main
> >>  1027  ./config/examples/arch-olcf-spock.py && make
> >>  1028  MPIR_CVAR_GPU_EAGER_DEVICE_MEM=0 MPICH_GPU_SUPPORT_ENABLED=1
> >> MPICH_SMP_SINGLE_COPY_MODE=CMA make check
> >>
> >> Satish
> >>
> >> On Fri, 10 Dec 2021, Satish Balay via petsc-dev wrote:
> >>
> >> > Works for me [per instructions in balay/update-spock,
> >> config/examples/arch-olcf-spock.py] with main - without these additional
> >> options
> >> >
> >> > I'll go ahead and merge in balay/update-spock
> >> >
> >> > Satish
> >> >
> >> > -
> >> >
> >> >  1009  git fetch -p
> >> >  1015  module load emacs
> >> >  1016  module load rocm/4.3.0
> >> >  1018  git reset --hard
> >> >  1019  git checkout origin/main
> >> >  1020  git merge origin/balay/update-spock
> >> >  1021  ./config/examples/arch-olcf-spock.py && make
> >> >
> >> >
> >> >
> >> > [balay@login2.spock petsc]$ MPIR_CVAR_GPU_EAGER_DEVICE_MEM=0
> >> MPICH_GPU_SUPPORT_ENABLED=1 MPICH_SMP_SINGLE_COPY_MODE=CMA make check
> >> > Running check examples to verify correct installation
> >> > Using PETSC_DIR=/autofs/nccs-svm1_home1/balay/petsc and
> >> PETSC_ARCH=arch-olcf-spock
> >> > C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI
> >> process
> >> > C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI
> >> processes
> >> > C/C++ example src/snes/tutorials/ex3k run successfully with
> >> kokkos-kernels
> >> > ***Error detected during compile or
> >> link!***
> >> > See http://www.mcs.anl.gov/petsc/documentation/faq.html
> >> > /ccs/home/balay/petsc/src/snes/tutorials ex5f
> >> > *
> >> > ftn -fPIC   -fPIC-I/autofs/nccs-svm1_home1/balay/petsc/include
> >> -I/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/include
> >> -I/opt/rocm-4.3.0/include ex5f.F90
> >> -Wl,-rpath,/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib
> >> -L/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib
> >> -Wl,-rpath,/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib
> >> -L/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib
> >> -Wl,-rpath,/opt/rocm-4.3.0/lib -L/opt/rocm-4.3.0/lib
> >> -Wl,-rpath,/opt/cray/pe/mpich/8.1.10/gtl/lib
> >> -L/opt/cray/pe/mpich/8.1.10/gtl/lib
> >> -Wl,-rpath,/opt/cray/pe/gcc/8.1.0/snos/lib64
> >> -L/opt/cray/pe/gcc/8.1.0/snos/lib64 -Wl,-rpath,/opt/cray/pe/libsci/
> >> 21.08.1.2/CRAY/9.0/x86_64/lib -L/opt/cray/pe/libsci/
> >> 21.08.1.2/CRAY/9.0/x86_64/lib
> >> -Wl,-rpath,/opt/cray/pe/mpich/8.1.10/ofi/cray/10.0/lib
> >> -L/opt/cray/pe/mpich/8.1.10/ofi/cray/10.0/lib
> >> -Wl,-rpath,/opt/cray/pe/dsmml/0.2.2/dsmml/lib
> >> -L/opt/cray/pe/dsmml/0.2.2/dsmml/lib -Wl,-rpath,/opt/cray/pe/pmi/6.0.14/lib
> >> -L/opt/cray/pe/pmi/6.0.14/lib -Wl,-rpath,/opt/cray/pe/cce/12.0.3/cce/x86_64/lib
> >> -L/opt/cray/pe/cce/12.0.3/cce/x86_64/lib
> >> -Wl,-rpath,/opt/cray/xpmem/2.2.40-2.1_2.44__g3cf3325.shasta/lib64
> >> -L/opt/cray/xpmem/2.2.40-2.1_2.44__g3cf3325.shasta/lib64
> >> -Wl,-rpath,/opt/cray/pe/cce/12.0.3/cce-clang/x86_64/lib/clang/12.0.0/lib/linux
> >> -L/opt/cray/pe/cce/12.0.3/cce-clang/x86_64/lib/clang/12.0.0/lib/linux
> >> -Wl,-rpath,/opt/cray/pe/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0
> >> -L/opt/cray/pe/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0
> >> -Wl,-rpath,/opt/cray/pe/cce

Re: [petsc-dev] spock

2021-12-10 Thread Satish Balay via petsc-dev
Merged now. And the following now works [for me].

 1025  git fetch -p
 1026  git checkout origin/main
 1027  ./config/examples/arch-olcf-spock.py && make
 1028  MPIR_CVAR_GPU_EAGER_DEVICE_MEM=0 MPICH_GPU_SUPPORT_ENABLED=1 
MPICH_SMP_SINGLE_COPY_MODE=CMA make check

Satish

On Fri, 10 Dec 2021, Satish Balay via petsc-dev wrote:

> Works for me [per instructions in balay/update-spock, 
> config/examples/arch-olcf-spock.py] with main - without these additional 
> options
> 
> I'll go ahead and merge in balay/update-spock
> 
> Satish
> 
> -
> 
>  1009  git fetch -p
>  1015  module load emacs
>  1016  module load rocm/4.3.0
>  1018  git reset --hard
>  1019  git checkout origin/main
>  1020  git merge origin/balay/update-spock
>  1021  ./config/examples/arch-olcf-spock.py && make
> 
> 
> 
> [balay@login2.spock petsc]$ MPIR_CVAR_GPU_EAGER_DEVICE_MEM=0 
> MPICH_GPU_SUPPORT_ENABLED=1 MPICH_SMP_SINGLE_COPY_MODE=CMA make check
> Running check examples to verify correct installation
> Using PETSC_DIR=/autofs/nccs-svm1_home1/balay/petsc and 
> PETSC_ARCH=arch-olcf-spock
> C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process
> C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes
> C/C++ example src/snes/tutorials/ex3k run successfully with kokkos-kernels
> ***Error detected during compile or link!***
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> /ccs/home/balay/petsc/src/snes/tutorials ex5f
> *
> ftn -fPIC   -fPIC-I/autofs/nccs-svm1_home1/balay/petsc/include 
> -I/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/include 
> -I/opt/rocm-4.3.0/include ex5f.F90  
> -Wl,-rpath,/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib 
> -L/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib 
> -Wl,-rpath,/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib 
> -L/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib 
> -Wl,-rpath,/opt/rocm-4.3.0/lib -L/opt/rocm-4.3.0/lib 
> -Wl,-rpath,/opt/cray/pe/mpich/8.1.10/gtl/lib 
> -L/opt/cray/pe/mpich/8.1.10/gtl/lib 
> -Wl,-rpath,/opt/cray/pe/gcc/8.1.0/snos/lib64 
> -L/opt/cray/pe/gcc/8.1.0/snos/lib64 
> -Wl,-rpath,/opt/cray/pe/libsci/21.08.1.2/CRAY/9.0/x86_64/lib 
> -L/opt/cray/pe/libsci/21.08.1.2/CRAY/9.0/x86_64/lib 
> -Wl,-rpath,/opt/cray/pe/mpich/8.1.10/ofi/cray/10.0/lib 
> -L/opt/cray/pe/mpich/8.1.10/ofi/cray/10.0/lib 
> -Wl,-rpath,/opt/cray/pe/dsmml/0.2.2/dsmml/lib 
> -L/opt/cray/pe/dsmml/0.2.2/dsmml/lib -Wl,-rpath,/opt/cray/pe/pmi/6.0.14/lib 
> -L/opt/cray/pe/pmi/6.0.14/lib -Wl,-rpath,/opt/cray/pe/cce/12.0.3/cce/x86_64/lib 
> -L/opt/cray/pe/cce/12.0.3/cce/x86_64/lib 
> -Wl,-rpath,/opt/cray/xpmem/2.2.40-2.1_2.44__g3cf3325.shasta/lib64 
> -L/opt/cray/xpmem/2.2.40-2.1_2.44__g3cf3325.shasta/lib64 
> -Wl,-rpath,/opt/cray/pe/cce/12.0.3/cce-clang/x86_64/lib/clang/12.0.0/lib/linux
>  -L/opt/cray/pe/cce/12.0.3/cce-clang/x86_64/lib/clang/12.0.0/lib/linux 
> -Wl,-rpath,/opt/cray/pe/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0 
> -L/opt/cray/pe/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0 
> -Wl,-rpath,/opt/cray/pe/cce/12.0.3/binutils/x86_64/x86_64-unknown-linux-gnu/lib
>  -L/opt/cray/pe/cce/12.0.3/binutils/x86_64/x86_64-unknown-linux-gnu/lib 
> -lpetsc -lmagma -lkokkoskernels -lkokkoscontainers -lkokkoscore -lhipsparse 
> -lhipblas -lrocsparse -lrocsolver -lrocblas -lrocrand -lamdhip64 -lstdc++ 
> -ldl -lmpi_gtl_hsa -lmpifort_cray -lmpi_cray -ldsmml -lpmi -lpmi2 -lxpmem 
> -lpgas-shmem -lquadmath -lmodules -lfi -lcraymath -lf -lu -lcsup -lgfortran 
> -lpthread -lgcc_eh -lm -lclang_rt.craypgo-x86_64
>   -lclang_rt.builtins-x86_64 -lquadmath -lstdc++ -ldl -lmpi_gtl_hsa -o 
> ex5f/opt/cray/pe/cce/12.0.3/binutils/x86_64/x86_64-pc-linux-gnu/bin/ld: 
> warning: alignment 128 of symbol 
> `$host_init$$runtime_init_for_iso_c_binding$iso_c_binding_' in 
> /opt/cray/pe/cce/12.0.3/cce/x86_64/lib/libmodules.so is smaller than 256 in 
> /tmp/pe_202599/ex5f_1.o
> /opt/cray/pe/cce/12.0.3/binutils/x86_64/x86_64-pc-linux-gnu/bin/ld: warning: 
> alignment 64 of symbol `$data_init$iso_c_binding_' in 
> /opt/cray/pe/cce/12.0.3/cce/x86_64/lib/libmodules.so is smaller than 256 in 
> /tmp/pe_202599/ex5f_1.o
> Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process
> Completed test examples
> [balay@login2.spock petsc]$ 
> 
> 
> On Fri, 10 Dec 2021, Mark Adams wrote:
> 
> > FWIW,  here is my current status.
> > 
> > 08:08 main= spock:/gpfs/alpine/csc314/scratch/adams/petsc$ make
> > PETSC_DIR=/gpfs/alpine/csc314/scratch/adams/petsc
> > PETSC_ARCH=arch-olcf-spock check
> > Running check examples to verify c

Re: [petsc-dev] spock

2021-12-10 Thread Satish Balay via petsc-dev
Works for me [per instructions in balay/update-spock, 
config/examples/arch-olcf-spock.py] with main - without these additional options

I'll go ahead and merge in balay/update-spock

Satish

-

 1009  git fetch -p
 1015  module load emacs
 1016  module load rocm/4.3.0
 1018  git reset --hard
 1019  git checkout origin/main
 1020  git merge origin/balay/update-spock
 1021  ./config/examples/arch-olcf-spock.py && make



[balay@login2.spock petsc]$ MPIR_CVAR_GPU_EAGER_DEVICE_MEM=0 
MPICH_GPU_SUPPORT_ENABLED=1 MPICH_SMP_SINGLE_COPY_MODE=CMA make check
Running check examples to verify correct installation
Using PETSC_DIR=/autofs/nccs-svm1_home1/balay/petsc and 
PETSC_ARCH=arch-olcf-spock
C/C++ example src/snes/tutorials/ex19 run successfully with 1 MPI process
C/C++ example src/snes/tutorials/ex19 run successfully with 2 MPI processes
C/C++ example src/snes/tutorials/ex3k run successfully with kokkos-kernels
***Error detected during compile or link!***
See http://www.mcs.anl.gov/petsc/documentation/faq.html
/ccs/home/balay/petsc/src/snes/tutorials ex5f
*
ftn -fPIC   -fPIC-I/autofs/nccs-svm1_home1/balay/petsc/include 
-I/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/include 
-I/opt/rocm-4.3.0/include ex5f.F90  
-Wl,-rpath,/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib 
-L/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib 
-Wl,-rpath,/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib 
-L/autofs/nccs-svm1_home1/balay/petsc/arch-olcf-spock/lib 
-Wl,-rpath,/opt/rocm-4.3.0/lib -L/opt/rocm-4.3.0/lib 
-Wl,-rpath,/opt/cray/pe/mpich/8.1.10/gtl/lib 
-L/opt/cray/pe/mpich/8.1.10/gtl/lib 
-Wl,-rpath,/opt/cray/pe/gcc/8.1.0/snos/lib64 
-L/opt/cray/pe/gcc/8.1.0/snos/lib64 
-Wl,-rpath,/opt/cray/pe/libsci/21.08.1.2/CRAY/9.0/x86_64/lib 
-L/opt/cray/pe/libsci/21.08.1.2/CRAY/9.0/x86_64/lib 
-Wl,-rpath,/opt/cray/pe/mpich/8.1.10/ofi/cray/10.0/lib 
-L/opt/cray/pe/mpich/8.1.10/ofi/cray/10.0/lib 
-Wl,-rpath,/opt/cray/pe/dsmml/0.2.2/dsmml/lib 
-L/opt/cray/pe/dsmml/0.2.2/dsmml/lib -Wl,-rpath,/opt/cray/pe/pmi/6.0.14/lib 
-L/opt/cray/pe/pmi/6.0.14/lib -Wl,-rpath,/opt/cray/pe/cce/12.0.3/cce/x86_64/lib 
-L/opt/cray/pe/cce/12.0.3/cce/x86_64/lib 
-Wl,-rpath,/opt/cray/xpmem/2.2.40-2.1_2.44__g3cf3325.shasta/lib64 
-L/opt/cray/xpmem/2.2.40-2.1_2.44__g3cf3325.shasta/lib64 
-Wl,-rpath,/opt/cray/pe/cce/12.0.3/cce-clang/x86_64/lib/clang/12.0.0/lib/linux 
-L/opt/cray/pe/cce/12.0.3/cce-clang/x86_64/lib/clang/12.0.0/lib/linux 
-Wl,-rpath,/opt/cray/pe/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0 
-L/opt/cray/pe/gcc/8.1.0/snos/lib/gcc/x86_64-suse-linux/8.1.0 
-Wl,-rpath,/opt/cray/pe/cce/12.0.3/binutils/x86_64/x86_64-unknown-linux-gnu/lib 
-L/opt/cray/pe/cce/12.0.3/binutils/x86_64/x86_64-unknown-linux-gnu/lib -lpetsc 
-lmagma -lkokkoskernels -lkokkoscontainers -lkokkoscore -lhipsparse -lhipblas 
-lrocsparse -lrocsolver -lrocblas -lrocrand -lamdhip64 -lstdc++ -ldl 
-lmpi_gtl_hsa -lmpifort_cray -lmpi_cray -ldsmml -lpmi -lpmi2 -lxpmem 
-lpgas-shmem -lquadmath -lmodules -lfi -lcraymath -lf -lu -lcsup -lgfortran 
-lpthread -lgcc_eh -lm -lclang_rt.craypgo-x86_64
  -lclang_rt.builtins-x86_64 -lquadmath -lstdc++ -ldl -lmpi_gtl_hsa -o 
ex5f/opt/cray/pe/cce/12.0.3/binutils/x86_64/x86_64-pc-linux-gnu/bin/ld: 
warning: alignment 128 of symbol 
`$host_init$$runtime_init_for_iso_c_binding$iso_c_binding_' in 
/opt/cray/pe/cce/12.0.3/cce/x86_64/lib/libmodules.so is smaller than 256 in 
/tmp/pe_202599/ex5f_1.o
/opt/cray/pe/cce/12.0.3/binutils/x86_64/x86_64-pc-linux-gnu/bin/ld: warning: 
alignment 64 of symbol `$data_init$iso_c_binding_' in 
/opt/cray/pe/cce/12.0.3/cce/x86_64/lib/libmodules.so is smaller than 256 in 
/tmp/pe_202599/ex5f_1.o
Fortran example src/snes/tutorials/ex5f run successfully with 1 MPI process
Completed test examples
[balay@login2.spock petsc]$ 


On Fri, 10 Dec 2021, Mark Adams wrote:

> FWIW,  here is my current status.
> 
> 08:08 main= spock:/gpfs/alpine/csc314/scratch/adams/petsc$ make
> PETSC_DIR=/gpfs/alpine/csc314/scratch/adams/petsc
> PETSC_ARCH=arch-olcf-spock check
> Running check examples to verify correct installation
> Using PETSC_DIR=/gpfs/alpine/csc314/scratch/adams/petsc and
> PETSC_ARCH=arch-olcf-spock
> Possible error running C/C++ src/snes/tutorials/ex19 with 1 MPI process
> See http://www.mcs.anl.gov/petsc/documentation/faq.html
> lid velocity = 0.0016, prandtl # = 1., grashof # = 1.
> 0 KSP Residual norm 0.0406612
> 1 KSP Residual norm 0.036923
> 2 KSP Residual norm 0.0191849
> 3 KSP Residual norm 0.00201589
> 4 KSP Residual norm 0.000376045
> 5 KSP Residual norm 4.2974e-05
> 6 KSP Residual norm 5.96585e-06
> 7 KSP Residual norm 4.5398e-07
> 8 KSP Residual norm 6.30474e-08
> 9 KSP Residual norm 5.55518e-09
>10 KSP Residual norm 6.180e-10
>11 KSP Residual norm 6.211e-11
>   Linear solve converged due to CONVERGED_RTOL iterations 11
> 0 KSP Residual 

Re: [petsc-dev] Kokkos build fail

2021-12-09 Thread Satish Balay via petsc-dev
My build is with xcode clang - not brew clang
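
To check which compiler the wrapper actually invokes [assuming openmpi's mpicxx here]:

  mpicxx --showme:command
  xcode-select -p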

Satish

On Thu, 9 Dec 2021, Mark Adams wrote:

> Mpich seems to give the same error.
> I use clang 13.0. I think I get that from homebrew.
> Should I try something like:
> brew install llvm@12
> 
> I see:
> 
> (conda_env) 07:50 adams/fix_mat_ex5k= ~/Codes/petsc2$ brew info llvm
> llvm: stable 13.0.0 (bottled), HEAD [keg-only]
> Next-gen compiler infrastructure
> https://llvm.org/
> Not installed
> From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/llvm.rb
> License: Apache-2.0 with LLVM-exception
> ==> Dependencies
> Build: cmake ✔, swig ✘
> Required: python@3.10 ✘
> ==> Options
> --HEAD
> Install HEAD version
> ==> Caveats
> To use the bundled libc++ please add the following LDFLAGS:
>   LDFLAGS="-L/usr/local/opt/llvm/lib -Wl,-rpath,/usr/local/opt/llvm/lib"
> 
> llvm is keg-only, which means it was not symlinked into /usr/local,
> because macOS already provides this software and installing another version
> in
> parallel can cause all kinds of trouble.
> 
> ==> Analytics
> install: 32,204 (30 days), 98,525 (90 days), 299,560 (365 days)
> install-on-request: 17,978 (30 days), 62,171 (90 days), 212,999 (365 days)
> build-error: 2,696 (30 days)
> 
> 
> On Wed, Dec 8, 2021 at 1:30 PM Satish Balay  wrote:
> 
> > This build goes through fine for me. [with petsc/main]
> >
> > xpro:petsc balay$ sw_vers
> > ProductName:Mac OS X
> > ProductVersion: 10.15.7
> > BuildVersion:   19H1519
> > xpro:petsc balay$  clang --version
> > Apple clang version 12.0.0 (clang-1200.0.32.2)
> > 
> > xpro:petsc balay$ ./configure --download-mpich --with-fc=0 COPTFLAGS="-g
> > -O" CXXOPTFLAGS="-g -O" --with-fortran-bindings=0 --download-kokkos=1
> > --download-kokkos-kernels=1 --with-kokkos-kernels-tpl=0  --with-zlib=1
> > --with-x=0
> >
> > Satish
> >
> >
> > On Wed, 8 Dec 2021, Jacob Faibussowitsch wrote:
> >
> > > > And your algorithm looks idempotent to me
> > >
> > > Believe me, I was sufficiently shocked when everything magically started
> > working the 3rd time around :)
> > >
> > > Best regards,
> > >
> > > Jacob Faibussowitsch
> > > (Jacob Fai - booss - oh - vitch)
> > >
> > > > On Dec 8, 2021, at 09:29, Mark Adams  wrote:
> > > >
> > > > Thanks,
> > > > And your algorithm looks idempotent to me
> > > >
> > > >
> > > > On Wed, Dec 8, 2021 at 9:13 AM Jacob Faibussowitsch <
> > jacob@gmail.com > wrote:
> > > > So I had similar issues back when I originally wrote the clang linter
> > — on Big Sur. The TL;DR for me was that Catalina originally shipped with
> > broken cmath headers, something future updates wouldn’t necessarily fix.
> > The only way to fix it was to:
> > > >
> > > > 1. Reinstall CLT
> > > > 2. Reinstall/install Xcode
> > > > 3. Repeat the above until it was fixed
> > > >
> > > > Now you may have an unrelated issue, but my error messages (e.g. about
> > missing “signbit”, “std::less_than”, etc in global namespace) were very
> > very similar. See discussions here:
> > > >
> > > > 1. https://gitlab.com/petsc/petsc/-/merge_requests/3773 <
> > https://gitlab.com/petsc/petsc/-/merge_requests/3773>
> > > > 2.
> > https://stackoverflow.com/questions/58628377/catalina-c-using-cmath-headers-yield-error-no-member-named-signbit-in-th
> > <
> > https://stackoverflow.com/questions/58628377/catalina-c-using-cmath-headers-yield-error-no-member-named-signbit-in-th
> > >
> > > > 3.
> > https://stackoverflow.com/questions/58313047/cannot-compile-r-packages-with-c-code-after-updating-to-macos-catalina
> > <
> > https://stackoverflow.com/questions/58313047/cannot-compile-r-packages-with-c-code-after-updating-to-macos-catalina
> > >
> > > >
> > > > If these help then you’re lucky I never clean out my “misc” bookmarks
> > folder :)
> > > >
> > > > Best regards,
> > > >
> > > > Jacob Faibussowitsch
> > > > (Jacob Fai - booss - oh - vitch)
> > > >
> > > >> On Dec 8, 2021, at 09:06, Mark Adams  > mfad...@lbl.gov>> wrote:
> > > >>
> > > >> Monterey.
> > > >> And my serial, optimized build works but it seems to use the same
> > compiler.
> > > >> I am testing the parallel build again with debug turned off.
> > > >>
> > > >> On Wed, Dec 8, 2021 at 9:04 AM Jacob Faibussowitsch <
> > jacob@gmail.com > wrote:
> > > >> You aren’t by chance on Catalina are you?
> > > >>
> > > >> Best regards,
> > > >>
> > > >> Jacob Faibussowitsch
> > > >> (Jacob Fai - booss - oh - vitch)
> > > >>
> > > >>> On Dec 8, 2021, at 08:49, Mark Adams  > mfad...@lbl.gov>> wrote:
> > > >>>
> > > >>> I am failing on OSX with openmpi. Kokkos is failing to build.
> > > >>> I seem to be using:
> > > >>>
> > > >>> (conda_env) 08:46 1 adams/fix_mat_ex5k *= ~/Codes/petsc2$
> > /usr/local/Cellar/open-mpi/4.1.1_2/bin/mpicxx --version
> > > >>> Apple clang version 13.0.0 (clang-1300.0.29.3)
> > > >>> Target: x86_64-apple-darwin21.1.0
> > > >>> Thread model: posix
> > > >>> InstalledDir:
> > 

Re: [petsc-dev] Kokkos build fail

2021-12-08 Thread Satish Balay via petsc-dev
This build goes through fine for me. [with petsc/main]

xpro:petsc balay$ sw_vers 
ProductName:Mac OS X
ProductVersion: 10.15.7
BuildVersion:   19H1519
xpro:petsc balay$  clang --version
Apple clang version 12.0.0 (clang-1200.0.32.2)

xpro:petsc balay$ ./configure --download-mpich --with-fc=0 COPTFLAGS="-g -O" 
CXXOPTFLAGS="-g -O" --with-fortran-bindings=0 --download-kokkos=1 
--download-kokkos-kernels=1 --with-kokkos-kernels-tpl=0  --with-zlib=1  
--with-x=0

Satish


On Wed, 8 Dec 2021, Jacob Faibussowitsch wrote:

> > And your algorithm looks idempotent to me
> 
> Believe me, I was sufficiently shocked when everything magically started 
> working the 3rd time around :)
> 
> Best regards,
> 
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> 
> > On Dec 8, 2021, at 09:29, Mark Adams  wrote:
> > 
> > Thanks,
> > And your algorithm looks idempotent to me
> > 
> > 
> > On Wed, Dec 8, 2021 at 9:13 AM Jacob Faibussowitsch  > > wrote:
> > So I had similar issues back when I originally wrote the clang linter — on 
> > Big Sur. The TL;DR for me was that Catalina originally shipped with broken 
> > cmath headers, something future updates wouldn’t necessarily fix. The only 
> > way to fix it was to:
> > 
> > 1. Reinstall CLT
> > 2. Reinstall/install Xcode
> > 3. Repeat the above until it was fixed
> > 
> > Now you may have an unrelated issue, but my error messages (e.g. about 
> > missing “signbit”, “std::less_than”, etc in global namespace) were very 
> > very similar. See discussions here:
> > 
> > 1. https://gitlab.com/petsc/petsc/-/merge_requests/3773 
> > 
> > 2. 
> > https://stackoverflow.com/questions/58628377/catalina-c-using-cmath-headers-yield-error-no-member-named-signbit-in-th
> >  
> > 
> > 3. 
> > https://stackoverflow.com/questions/58313047/cannot-compile-r-packages-with-c-code-after-updating-to-macos-catalina
> >  
> > 
> > 
> > If these help then you’re lucky I never clean out my “misc” bookmarks 
> > folder :)
> > 
> > Best regards,
> > 
> > Jacob Faibussowitsch
> > (Jacob Fai - booss - oh - vitch)
> > 
> >> On Dec 8, 2021, at 09:06, Mark Adams  >> > wrote:
> >> 
> >> Monterey.
> >> And my serial, optimized build works but it seems to use the same compiler.
> >> I am testing the parallel build again with debug turned off.
> >> 
> >> On Wed, Dec 8, 2021 at 9:04 AM Jacob Faibussowitsch  >> > wrote:
> >> You aren’t by chance on Catalina are you?
> >> 
> >> Best regards,
> >> 
> >> Jacob Faibussowitsch
> >> (Jacob Fai - booss - oh - vitch)
> >> 
> >>> On Dec 8, 2021, at 08:49, Mark Adams  >>> > wrote:
> >>> 
> >>> I am failing on OSX with openmpi. Kokkos is failing to build.
> >>> I seem to be using:
> >>> 
> >>> (conda_env) 08:46 1 adams/fix_mat_ex5k *= ~/Codes/petsc2$ 
> >>> /usr/local/Cellar/open-mpi/4.1.1_2/bin/mpicxx --version
> >>> Apple clang version 13.0.0 (clang-1300.0.29.3)
> >>> Target: x86_64-apple-darwin21.1.0
> >>> Thread model: posix
> >>> InstalledDir: 
> >>> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
> >>> 
> >>> Any ideas?
> >>> Thanks,
> >>> Mark
> >>> 
> >> 
> > 
> 
> 


Re: [petsc-dev] PTScotch problem on Mac

2021-11-23 Thread Satish Balay via petsc-dev
perhaps "dtruss -f" ?

https://stackoverflow.com/questions/1925978/equivalent-of-strace-feopen-command-on-mac-os-x

balay@ypro petsc % dtruss -f ./configure
dtrace: system integrity protection is on, some features will not be available
dtrace: failed to initialize dtrace: DTrace requires additional privileges

Oh well..
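
Maybe fs_usage works as a substitute [untested - needs sudo, but I believe it runs with SIP enabled]. In one terminal:

  sudo fs_usage -w -f filesys | grep /opt/local

and run ./configure in another - to see which /opt/local binaries get opened/exec'd.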

Satish

On Tue, 23 Nov 2021, Barry Smith wrote:

> 
> >> Hmm, I cannot figure out how to do that. Developer tools on Mac are 
> >> embarrassing.
> 
> I think you are probably looking in the wrong place. The GUI based Xcode 
> Instruments tools likely have this type of capability but it may not be 
> accessible from the command line.
> 
> 
> > On Nov 23, 2021, at 3:02 PM, Matthew Knepley  wrote:
> > 
> > On Tue, Nov 23, 2021 at 2:05 PM Satish Balay  > > wrote:
> > On Tue, 23 Nov 2021, Matthew Knepley wrote:
> > 
> > > On Tue, Nov 23, 2021 at 12:56 PM Matthew Knepley  > > > wrote:
> > > 
> > > > On Tue, Nov 23, 2021 at 12:29 PM Satish Balay  > > > > wrote:
> > > >
> > > >> The primary difference I can spot [as you say] is the older xcode you
> > > >> have. Eventhough it says the same version of flex - perhaps its buggy?
> > > >>
> > > >> Apple clang version 11.0.3 (clang-1103.0.32.59)
> > > >> vs
> > > >> Apple clang version 12.0.0 (clang-1200.0.32.2)
> > > >>
> > > >>
> > > >> >
> > > >> PATH=/PETSc3/cig/bin:/PETSc3/petsc/petsc-pylith/arch-pylith-debug/bin:/PETSc3/petsc/apple/bin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/texbin:/opt/X11/bin:/usr/local/git/bin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/opt/local/bin:/opt/local/sbin:/usr/X11/bin:/usr/local/texlive/2019/bin/x86_64-darwin:/usr/local/cuda/bin:/usr/local/gmt/bin:/usr/local/bin:/usr/X11/bin:/usr/local/texlive/2019/bin/x86_64-darwin:/usr/local/cuda/bin:/usr/local/gmt/bin
> > > >>
> > > >> BTW: Can you try a build with the following and see if it makes a
> > > >> difference?
> > > >>
> > > >> PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin
> > > >> ./configure PETSC_ARCH=arch-test --with-mpi-dir=/PETSc3/petsc/apple
> > > >> --download-c2html --download-ptscotch
> > > >>
> > > >
> > > > Damn damn damn damn. Now I have to bisect the PATH to see how in the 
> > > > world
> > > > that can make a difference.
> > > >
> > > 
> > > Okay, the configure succeeds by taking out /opt/local/bin:/opt/local/sbin,
> > > but I cannot figure out why this would be the case?
> > 
> > 
> > If you can do 'strace --follow-forks' or equivalent on Mac - you might
> > be able to see what gets used from /opt/local/bin/
> > 
> > Hmm, I cannot figure out how to do that. Developer tools on Mac are 
> > embarrassing.
> >  
> > Also you might be better off using brew instead of what you currently
> > have.. [likely you don't need most of the binaries below. 'brew
> > leaves' gives a nice way to keep track of whats really needed]
> > 
> > I do not use MacPorts. I edited my .profile years ago and I missed that 
> > when abandoning it.
> >  
> > Or a brute force bisection by moving binaries out of (and back into) this 
> > location.
> > 
> > That sounds like something for "grad student time"
> > 
> >Thanks,
> > 
> >  Matt
> >  
> > Satish
> > 
> > > 
> > > knepley/feature-plex-multiple-hybrid *$:/PETSc3/petsc/petsc-pylith$ ls
> > > /opt/local/sbin/
> > > knepley/feature-plex-multiple-hybrid *$:/PETSc3/petsc/petsc-pylith$ ls
> > > /opt/local/bin/
> > > a2p envsubstlibnetcfg-5.12
> > >  perlivp-5.8 prove
> > > a2p-5.12find2perl   libnetcfg-5.8
> > > perlthanks  prove-5.12
> > > a2p-5.8 find2perl-5.12  msgattrib
> > > perlthanks-5.12 prove-5.8
> > > autoconf263 find2perl-5.8   msgcat
> > >  perlthanks-5.8  psed
> > > 
> > > autoheader263   gettext msgcmp
> > >  piconv  psed-5.12
> > > autom4te263 gettext.sh  msgcomm
> > > piconv-5.12 psed-5.8
> > > autopoint   gettextize  msgconv
> > > piconv-5.8  pstruct
> > > autoreconf263   ghc msgen
> > > pl2pm   pstruct-5.12
> > > autoscan263 ghc-6.10.4  msgexec
> > > pl2pm-5.12  pstruct-5.8
> > > autoupdate263   ghc-pkg msgfilter
> > > pl2pm-5.8   ptar-5.12
> > > c2phghc-pkg-6.10.4  msgfmt
> > >  pod2htmlptardiff-5.12
> > > c2ph-5.12   ghcimsggrep
> > > pod2html-5.12   recode-sr-latin
> > > c2ph-5.8ghci-6.10.4 msginit
> > > pod2html-5.8reset
> > > c_rehashgm4 msgmerge
> > >  pod2latex   

Re: [petsc-dev] PTScotch problem on Mac

2021-11-23 Thread Satish Balay via petsc-dev
On Tue, 23 Nov 2021, Matthew Knepley wrote:

> On Tue, Nov 23, 2021 at 12:56 PM Matthew Knepley  wrote:
> 
> > On Tue, Nov 23, 2021 at 12:29 PM Satish Balay  wrote:
> >
> >> The primary difference I can spot [as you say] is the older xcode you
> >> have. Eventhough it says the same version of flex - perhaps its buggy?
> >>
> >> Apple clang version 11.0.3 (clang-1103.0.32.59)
> >> vs
> >> Apple clang version 12.0.0 (clang-1200.0.32.2)
> >>
> >>
> >> >
> >> PATH=/PETSc3/cig/bin:/PETSc3/petsc/petsc-pylith/arch-pylith-debug/bin:/PETSc3/petsc/apple/bin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/texbin:/opt/X11/bin:/usr/local/git/bin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/opt/local/bin:/opt/local/sbin:/usr/X11/bin:/usr/local/texlive/2019/bin/x86_64-darwin:/usr/local/cuda/bin:/usr/local/gmt/bin:/usr/local/bin:/usr/X11/bin:/usr/local/texlive/2019/bin/x86_64-darwin:/usr/local/cuda/bin:/usr/local/gmt/bin
> >>
> >> BTW: Can you try a build with the following and see if it makes a
> >> difference?
> >>
> >> PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin
> >> ./configure PETSC_ARCH=arch-test --with-mpi-dir=/PETSc3/petsc/apple
> >> --download-c2html --download-ptscotch
> >>
> >
> > Damn damn damn damn. Now I have to bisect the PATH to see how in the world
> > that can make a difference.
> >
> 
> Okay, the configure succeeds by taking out /opt/local/bin:/opt/local/sbin,
> but I cannot figure out why this would be the case?


If you can do 'strace --follow-forks' or equivalent on Mac - you might
be able to see what gets used from /opt/local/bin/

Also you might be better off using brew instead of what you currently
have. [likely you don't need most of the binaries below - 'brew
leaves' gives a nice way to keep track of what's really needed]

Or a brute force bisection by moving binaries out of (and back into) this 
location.
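
i.e. something like [rough sketch]:

  mkdir ~/opt-local-parked
  sudo mv /opt/local/bin/[a-h]* ~/opt-local-parked/    # park roughly half of them
  ./configure ... --download-ptscotch                  # retest
  sudo mv ~/opt-local-parked/* /opt/local/bin/         # restore - then repeat with the other half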

Satish

> 
> knepley/feature-plex-multiple-hybrid *$:/PETSc3/petsc/petsc-pylith$ ls
> /opt/local/sbin/
> knepley/feature-plex-multiple-hybrid *$:/PETSc3/petsc/petsc-pylith$ ls
> /opt/local/bin/
> a2p envsubstlibnetcfg-5.12
>  perlivp-5.8 prove
> a2p-5.12find2perl   libnetcfg-5.8
> perlthanks  prove-5.12
> a2p-5.8 find2perl-5.12  msgattrib
> perlthanks-5.12 prove-5.8
> autoconf263 find2perl-5.8   msgcat
>  perlthanks-5.8  psed
> 
> autoheader263   gettext msgcmp
>  piconv  psed-5.12
> autom4te263 gettext.sh  msgcomm
> piconv-5.12 psed-5.8
> autopoint   gettextize  msgconv
> piconv-5.8  pstruct
> autoreconf263   ghc msgen
> pl2pm   pstruct-5.12
> autoscan263 ghc-6.10.4  msgexec
> pl2pm-5.12  pstruct-5.8
> autoupdate263   ghc-pkg msgfilter
> pl2pm-5.8   ptar-5.12
> c2phghc-pkg-6.10.4  msgfmt
>  pod2htmlptardiff-5.12
> c2ph-5.12   ghcimsggrep
> pod2html-5.12   recode-sr-latin
> c2ph-5.8ghci-6.10.4 msginit
> pod2html-5.8reset
> c_rehashgm4 msgmerge
>  pod2latex   runghc
> captoinfo   gperf   msgunfmt
>  pod2latex-5.12  runhaskell
> clear   h2phmsguniq
> pod2latex-5.8   s2p
> config_data-5.12h2ph-5.12   ncurses5-config
> pod2man s2p-5.12
> corelist-5.12   h2ph-5.8ncursesw5-config
>  pod2man-5.12s2p-5.8
> corelist-5.8h2xsngettext
>  pod2man-5.8 shasum-5.12
> cpanh2xs-5.12   openssl
> pod2textsplain
> cpan-5.12   h2xs-5.8perl
>  pod2text-5.12   splain-5.12
> cpan-5.8haddock perl5
> pod2text-5.8splain-5.8
> cpan2dist   hasktagsperl5.12
>  pod2usage   tabs
> cpan2dist-5.12  help2manperl5.12.3
>  pod2usage-5.12  tic
> cpanp   hp2ps   perl5.8
> pod2usage-5.8   toe
> cpanp-5.12  hpc perl5.8.9
> podchecker  tput
> cpanp-run-perl  hsc2hs  perlbug
> podchecker-5.12 tset
> cpanp-run-perl-5.12 iconv   perlbug-5.12
>  podchecker-5.8  wget
> daemondoidn perlbug-5.8
> podselect   xgettext
> dprofpp ifnames263  perlcc-5.8
>  podselect-5.12  xmlwf
> 

Re: [petsc-dev] PTScotch problem on Mac

2021-11-23 Thread Satish Balay via petsc-dev
The primary difference I can spot [as you say] is the older xcode you have. 
Even though it says the same version of flex - perhaps it's buggy?

Apple clang version 11.0.3 (clang-1103.0.32.59)
vs
Apple clang version 12.0.0 (clang-1200.0.32.2)


> PATH=/PETSc3/cig/bin:/PETSc3/petsc/petsc-pylith/arch-pylith-debug/bin:/PETSc3/petsc/apple/bin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/texbin:/opt/X11/bin:/usr/local/git/bin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/opt/local/bin:/opt/local/sbin:/usr/X11/bin:/usr/local/texlive/2019/bin/x86_64-darwin:/usr/local/cuda/bin:/usr/local/gmt/bin:/usr/local/bin:/usr/X11/bin:/usr/local/texlive/2019/bin/x86_64-darwin:/usr/local/cuda/bin:/usr/local/gmt/bin

BTW: Can you try a build with the following and see if it makes a difference?

PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin ./configure 
PETSC_ARCH=arch-test --with-mpi-dir=/PETSc3/petsc/apple --download-c2html 
--download-ptscotch

Satish


On Tue, 23 Nov 2021, Matthew Knepley wrote:

> Here it is.
> 
>   Matt
> 
> On Tue, Nov 23, 2021 at 11:44 AM Satish Balay  wrote:
> 
> > On Tue, 23 Nov 2021, Matthew Knepley wrote:
> >
> > > On Tue, Nov 23, 2021 at 11:28 AM Satish Balay  wrote:
> > >
> > > > Well we don't have this issue on our (macos) CI boxes where both c2html
> > > > and scotch build and run daily [in CI]
> > > >
> > > > what 'flex' are you using? And why does it behave differently on your
> > box?
> > > >
> > >
> > > main *$:/PETSc3/petsc/petsc-dev$ which flex
> > > /usr/bin/flex
> > > main *$:/PETSc3/petsc/petsc-dev$ flex --version
> > > flex 2.5.35 Apple(flex-32)
> > >
> > >
> > > > And what errors do you get?
> > > >
> > >
> > > Without the extra input define in PTScotch.py, I get the yylval symbol
> > > undefined and the lexer symbol. When I give
> > > that define as input, only the lexer symbol is undefined.
> >
> > BTW: Can you send the log?
> >
> > Satish
> >
> > >
> > >
> > > > Perhaps CI is using older xcode (command line tools) - and you are
> > using
> > > > newer? Or something else?
> > > >
> > >
> > > Probably the other way around. I am on Catalina 10.15.6
> > >
> > >Matt
> > >
> > >
> > > > Barry - do you have this issue on your machine?
> > > >
> > > > balay@ypro ~ % which flex
> > > > /usr/bin/flex
> > > > balay@ypro ~ % /usr/bin/flex --version
> > > > flex 2.5.35 Apple(flex-32)
> > > > balay@ypro petsc % clang -v
> > > > Apple clang version 12.0.0 (clang-1200.0.32.2)
> > > > Target: x86_64-apple-darwin19.6.0
> > > > Thread model: posix
> > > > InstalledDir: /Library/Developer/CommandLineTools/usr/bin
> > > > balay@ypro ~ % balay@ypro petsc % ./configure
> > > > --with-mpi-dir=$HOME/mpich-3.4.2 --download-c2html --download-ptscotch
> > > >
> > > > 
> > > >
> > > >
> > > > Satish
> > > >
> > > > On Tue, 23 Nov 2021, Matthew Knepley wrote:
> > > >
> > > > > This is the same flex problem as I had for c2html, but I was more
> > > > > determined tracking it down this time. The first problem is that we
> > were
> > > > > not renaming in the parser,
> > > > >
> > > > > main *$:/PETSc3/petsc/petsc-dev$ git diff
> > > > > diff --git a/config/BuildSystem/config/packages/PTScotch.py
> > > > > b/config/BuildSystem/config/packages/PTScotch.py
> > > > > index d1c277b6e9f..e046804c17f 100644
> > > > > --- a/config/BuildSystem/config/packages/PTScotch.py
> > > > > +++ b/config/BuildSystem/config/packages/PTScotch.py
> > > > > @@ -70,7 +70,7 @@ class Configure(config.package.Package):
> > > > >  if self.libraries.add('-lrt','timer_create'): ldflags += ' -lrt'
> > > > >  self.cflags = self.cflags + ' -DCOMMON_RANDOM_FIXED_SEED'
> > > > >  # do not use -DSCOTCH_PTHREAD because requires MPI built for
> > > > threads.
> > > > > -self.cflags = self.cflags + ' -DSCOTCH_RENAME
> > > > > -Drestrict="'+self.compilers.cRestrict+'"'
> > > > > +self.cflags = self.cflags + ' -DSCOTCH_RENAME
> > -DSCOTCH_RENAME_PARSER
> > > > > -Drestrict="'+self.compilers.cRestrict+'"'
> > > > >  # this is needed on the Mac, because common2.c includes common.h
> > > > which
> > > > > DOES NOT include mpi.h because
> > > > >  # SCOTCH_PTSCOTCH is NOT defined above Mac does not know what
> > > > > clock_gettime() is!
> > > > >  if self.setCompilers.isDarwin(self.log):
> > > > >
> > > > > Second, they were not treating this case completely correctly:
> > > > >
> > > > >
> > > >
> > (93454e8...):/PETSc3/petsc/petsc-dev/arch-master-debug/externalpackages/git.ptscotch/src/libscotch$
> > > > > git diff HEAD~1
> > > > > diff --git a/src/libscotch/parser_yy.h b/src/libscotch/parser_yy.h
> > > > > index 931315d..95b8160 100644
> > > > > --- a/src/libscotch/parser_yy.h
> > > > > +++ b/src/libscotch/parser_yy.h
> > > > > @@ -62,6 +62,9 @@
> > > > >
> > > > >  #if ((defined SCOTCH_RENAME_PARSER) || (defined yylex)) /* If prefix
> > > > > renaming*/
> > > > >  #define scotchyyparse   

Re: [petsc-dev] PTScotch problem on Mac

2021-11-23 Thread Satish Balay via petsc-dev
On Tue, 23 Nov 2021, Matthew Knepley wrote:

> On Tue, Nov 23, 2021 at 11:28 AM Satish Balay  wrote:
> 
> > Well we don't have this issue on our (macos) CI boxes where both c2html
> > and scotch build and run daily [in CI]
> >
> > what 'flex' are you using? And why does it behave differently on your box?
> >
> 
> main *$:/PETSc3/petsc/petsc-dev$ which flex
> /usr/bin/flex
> main *$:/PETSc3/petsc/petsc-dev$ flex --version
> flex 2.5.35 Apple(flex-32)
> 
> 
> > And what errors do you get?
> >
> 
> Without the extra input define in PTScotch.py, I get the yylval symbol
> undefined and the lexer symbol. When I give
> that define as input, only the lexer symbol is undefined.

BTW: Can you send the log?

Satish

> 
> 
> > Perhaps CI is using older xcode (command line tools) - and you are using
> > newer? Or something else?
> >
> 
> Probably the other way around. I am on Catalina 10.15.6
> 
>Matt
> 
> 
> > Barry - do you have this issue on your machine?
> >
> > balay@ypro ~ % which flex
> > /usr/bin/flex
> > balay@ypro ~ % /usr/bin/flex --version
> > flex 2.5.35 Apple(flex-32)
> > balay@ypro petsc % clang -v
> > Apple clang version 12.0.0 (clang-1200.0.32.2)
> > Target: x86_64-apple-darwin19.6.0
> > Thread model: posix
> > InstalledDir: /Library/Developer/CommandLineTools/usr/bin
> > balay@ypro ~ % balay@ypro petsc % ./configure
> > --with-mpi-dir=$HOME/mpich-3.4.2 --download-c2html --download-ptscotch
> >
> > 
> >
> >
> > Satish
> >
> > On Tue, 23 Nov 2021, Matthew Knepley wrote:
> >
> > > This is the same flex problem as I had for c2html, but I was more
> > > determined tracking it down this time. The first problem is that we were
> > > not renaming in the parser,
> > >
> > > main *$:/PETSc3/petsc/petsc-dev$ git diff
> > > diff --git a/config/BuildSystem/config/packages/PTScotch.py
> > > b/config/BuildSystem/config/packages/PTScotch.py
> > > index d1c277b6e9f..e046804c17f 100644
> > > --- a/config/BuildSystem/config/packages/PTScotch.py
> > > +++ b/config/BuildSystem/config/packages/PTScotch.py
> > > @@ -70,7 +70,7 @@ class Configure(config.package.Package):
> > >  if self.libraries.add('-lrt','timer_create'): ldflags += ' -lrt'
> > >  self.cflags = self.cflags + ' -DCOMMON_RANDOM_FIXED_SEED'
> > >  # do not use -DSCOTCH_PTHREAD because requires MPI built for
> > threads.
> > > -self.cflags = self.cflags + ' -DSCOTCH_RENAME
> > > -Drestrict="'+self.compilers.cRestrict+'"'
> > > +self.cflags = self.cflags + ' -DSCOTCH_RENAME -DSCOTCH_RENAME_PARSER
> > > -Drestrict="'+self.compilers.cRestrict+'"'
> > >  # this is needed on the Mac, because common2.c includes common.h
> > which
> > > DOES NOT include mpi.h because
> > >  # SCOTCH_PTSCOTCH is NOT defined above Mac does not know what
> > > clock_gettime() is!
> > >  if self.setCompilers.isDarwin(self.log):
> > >
> > > Second, they were not treating this case completely correctly:
> > >
> > >
> > (93454e8...):/PETSc3/petsc/petsc-dev/arch-master-debug/externalpackages/git.ptscotch/src/libscotch$
> > > git diff HEAD~1
> > > diff --git a/src/libscotch/parser_yy.h b/src/libscotch/parser_yy.h
> > > index 931315d..95b8160 100644
> > > --- a/src/libscotch/parser_yy.h
> > > +++ b/src/libscotch/parser_yy.h
> > > @@ -62,6 +62,9 @@
> > >
> > >  #if ((defined SCOTCH_RENAME_PARSER) || (defined yylex)) /* If prefix
> > > renaming*/
> > >  #define scotchyyparse   stratParserParse2 /* Parser function
> > > name*/
> > > +#if !defined(yylex)
> > > +#define yylex   scotchyylex
> > > +#endif
> > >  #ifndef yylval
> > >  #define yylval  SCOTCH_NAME_MACRO3 (scotchyy,
> > > SCOTCH_NAME_SUFFIXC, lval) /* It should be Yacc/Bison's job to redefine
> > it!
> > >  */
> > >  #endif /* yylval  */
> > >
> > > How should we go about getting this fix in? Do you need to have our own
> > > branch of PTScotch?
> > >
> > >   Thanks,
> > >
> > >  Matt
> > >
> > >
> >
> 
> 
> 



Re: [petsc-dev] PTScotch problem on Mac

2021-11-23 Thread Satish Balay via petsc-dev


On Tue, 23 Nov 2021, Matthew Knepley wrote:

> On Tue, Nov 23, 2021 at 11:28 AM Satish Balay  wrote:
> 
> > Well we don't have this issue on our (macos) CI boxes where both c2html
> > and scotch build and run daily [in CI]
> >
> > what 'flex' are you using? And why does it behave differently on your box?
> >
> 
> main *$:/PETSc3/petsc/petsc-dev$ which flex
> /usr/bin/flex
> main *$:/PETSc3/petsc/petsc-dev$ flex --version
> flex 2.5.35 Apple(flex-32)
> 
> 
> > And what errors do you get?
> >
> 
> Without the extra input define in PTScotch.py, I get the yylval symbol
> undefined and the lexer symbol. When I give
> that define as input, only the lexer symbol is undefined.
> 
> 
> > Perhaps CI is using older xcode (command line tools) - and you are using
> > newer? Or something else?
> >
> 
> Probably the other way around. I am on Catalina 10.15.6

balay@ypro petsc % sw_vers
ProductName:Mac OS X
ProductVersion: 10.15.7
BuildVersion:   19H1519

Well it's the same OS and same flex version - so I don't know why things break 
only for you [as there have been no other bug reports on this]

Satish

> 
>Matt
> 
> 
> > Barry - do you have this issue on your machine?
> >
> > balay@ypro ~ % which flex
> > /usr/bin/flex
> > balay@ypro ~ % /usr/bin/flex --version
> > flex 2.5.35 Apple(flex-32)
> > balay@ypro petsc % clang -v
> > Apple clang version 12.0.0 (clang-1200.0.32.2)
> > Target: x86_64-apple-darwin19.6.0
> > Thread model: posix
> > InstalledDir: /Library/Developer/CommandLineTools/usr/bin
> > balay@ypro ~ % balay@ypro petsc % ./configure
> > --with-mpi-dir=$HOME/mpich-3.4.2 --download-c2html --download-ptscotch
> >
> > 
> >
> >
> > Satish
> >
> > On Tue, 23 Nov 2021, Matthew Knepley wrote:
> >
> > > This is the same flex problem as I had for c2html, but I was more
> > > determined tracking it down this time. The first problem is that we were
> > > not renaming in the parser,
> > >
> > > main *$:/PETSc3/petsc/petsc-dev$ git diff
> > > diff --git a/config/BuildSystem/config/packages/PTScotch.py
> > > b/config/BuildSystem/config/packages/PTScotch.py
> > > index d1c277b6e9f..e046804c17f 100644
> > > --- a/config/BuildSystem/config/packages/PTScotch.py
> > > +++ b/config/BuildSystem/config/packages/PTScotch.py
> > > @@ -70,7 +70,7 @@ class Configure(config.package.Package):
> > >  if self.libraries.add('-lrt','timer_create'): ldflags += ' -lrt'
> > >  self.cflags = self.cflags + ' -DCOMMON_RANDOM_FIXED_SEED'
> > >  # do not use -DSCOTCH_PTHREAD because requires MPI built for
> > threads.
> > > -self.cflags = self.cflags + ' -DSCOTCH_RENAME
> > > -Drestrict="'+self.compilers.cRestrict+'"'
> > > +self.cflags = self.cflags + ' -DSCOTCH_RENAME -DSCOTCH_RENAME_PARSER
> > > -Drestrict="'+self.compilers.cRestrict+'"'
> > >  # this is needed on the Mac, because common2.c includes common.h
> > which
> > > DOES NOT include mpi.h because
> > >  # SCOTCH_PTSCOTCH is NOT defined above Mac does not know what
> > > clock_gettime() is!
> > >  if self.setCompilers.isDarwin(self.log):
> > >
> > > Second, they were not treating this case completely correctly:
> > >
> > >
> > (93454e8...):/PETSc3/petsc/petsc-dev/arch-master-debug/externalpackages/git.ptscotch/src/libscotch$
> > > git diff HEAD~1
> > > diff --git a/src/libscotch/parser_yy.h b/src/libscotch/parser_yy.h
> > > index 931315d..95b8160 100644
> > > --- a/src/libscotch/parser_yy.h
> > > +++ b/src/libscotch/parser_yy.h
> > > @@ -62,6 +62,9 @@
> > >
> > >  #if ((defined SCOTCH_RENAME_PARSER) || (defined yylex)) /* If prefix
> > > renaming*/
> > >  #define scotchyyparse   stratParserParse2 /* Parser function
> > > name*/
> > > +#if !defined(yylex)
> > > +#define yylex   scotchyylex
> > > +#endif
> > >  #ifndef yylval
> > >  #define yylval  SCOTCH_NAME_MACRO3 (scotchyy,
> > > SCOTCH_NAME_SUFFIXC, lval) /* It should be Yacc/Bison's job to redefine
> > it!
> > >  */
> > >  #endif /* yylval  */
> > >
> > > How should we go about getting this fix in? Do you need to have our own
> > > branch of PTScotch?
> > >
> > >   Thanks,
> > >
> > >  Matt
> > >
> > >
> >
> 
> 
> 



Re: [petsc-dev] I am getting this error ...

2021-11-08 Thread Satish Balay via petsc-dev
Better yet.. [from Junchao]

export CRAY_ACCEL_TARGET=host


Satish

On Fri, 5 Nov 2021, Mark Adams wrote:

> Bingo!
> 
> On Fri, Nov 5, 2021 at 12:54 PM Satish Balay  wrote:
> 
> > Yeah remove [C,CPP,CXX,CXXPP,F] FLAGS
> >
> > CPP defaults to '$CC -E' - so with "--with-cc='cc -mp=gpu'" - it should
> > use "cc -mp=gpu -E"
> >
> > Satish
> >
> > On Fri, 5 Nov 2021, Mark Adams wrote:
> >
> > > How about CPP flags?
> > >
> > > On Fri, Nov 5, 2021 at 12:29 PM Satish Balay  wrote:
> > >
> > > > I guess another way to deal with this is: not use CFLAGS etc..
> > > >
> > > >
> > > > --with-cc='cc -mp=gpu' --with-cxx='CC -mp=gpu' --with-fc='ftn -mp=gpu'
> > > >
> > > > Satish
> > > >
> > > >
> > > > On Fri, 5 Nov 2021, Mark Adams wrote:
> > > >
> > > > > Yes, thanks.
> > > > > I emailed the NERSc person and told him where we are and that we
> > could
> > > > fix
> > > > > this manually, but I don't have this issue on Summit ...
> > > > > He did not understand for the longest time, not sure he does now,
> > that we
> > > > > do not add -gpu.
> > > > > And I gave him your reproducer.
> > > > > With any luck, with the reproducer, they can figure this out at some
> > > > point.
> > > > > MPI is not working well for my scaling studies and my app has also
> > given
> > > > up
> > > > > on nvhpc, so we are good.
> > > > >
> > > > >
> > > > > On Fri, Nov 5, 2021 at 9:49 AM Satish Balay 
> > wrote:
> > > > >
> > > > > > For now - you could manually edit petscvariables and remove
> > -mp=gpu
> > > > from
> > > > > > it.
> > > > > >
> > > > > > Its primarily required to make configure happy.
> > > > > >
> > > > > > Satish
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Thu, 4 Nov 2021, Barry Smith wrote:
> > > > > >
> > > > > > >
> > > > > > >This comes from the persistent problem with PETSc's make
> > system
> > > > using
> > > > > > too many flags for compiling CUDA that have not been tested by
> > > > configure.
> > > > > > See below the -mp=gpu is provided probably from the CPPFLAGS or
> > > > > > CXXPPFLAGS(sp) that is improperly used by the PETSc makefiles to
> > > > compile
> > > > > > CUDA code!
> > > > > > >
> > > > > > >
> > > > > > > Using CUDA compile:
> > > > > > /global/common/software/nersc/cos1.3/cuda/11.3.0/bin/nvcc -o .o
> > > > > > -I/opt/cray/pe/mpich/8.1.10/ofi/nvidia/20.7/include
> > > > -I/opt/cray/pe/libsci/
> > > > > > 21.08.1.2/NVIDIA/20.7/x86_64/include
> > -I/opt/cray/pe/pmi/6.0.14/include
> > > > > > -I/opt/cray/pe/dsmml/0.2.2/dsmml//include
> > > > > > -I/opt/cray/xpmem/2.2.40-7.0.1.0_3.1__g1d7a24d.shasta/include  -g
> > > > > > -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10
> > > > > > -DLANDAU_MAX_Q=4 -Xcompiler -fPIC -std=c++17 -gencode
> > > > > > arch=compute_80,code=sm_80  -Wno-deprecated-gpu-targets  -c
> > > > > > --compiler-options=-I/global/homes/m/madams/petsc/include
> > > > > >
> > -I/global/homes/m/madams/petsc/arch-perlmutter-opt-nvidia-cuda/include
> > > > > > -I/global/common/software/nersc/cos1.3/cuda/11.3.0/include -mp=gpu
> > > > > > -I/global/homes/m/madams/petsc/include
> > > > > >
> > -I/global/homes/m/madams/petsc/arch-perlmutter-opt-nvidia-cuda/include
> > > > > > -I/global/common/software/nersc/cos1.3/cuda/11.3.0/include
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > On Nov 4, 2021, at 7:31 PM, Mark Adams 
> > wrote:
> > > > > > > >
> > > > > > > > OK, configure is done.
> > > > > > > > Maybe I have too many -mp=gpu
> > > > > > > >
> > > > > > > >CUDAC
> > > > > >
> > > >
> > arch-perlmutter-opt-nvidia-cuda/obj/sys/classes/random/impls/curand/curand2.o
> > > > > > > > gcc: error: unrecognized command line option ‘-mp=gpu’; did you
> > > > mean
> > > > > > ‘-mpku’?
> > > > > > > >
> > > > > > > > On Thu, Nov 4, 2021 at 5:51 PM Barry Smith  > > >  > > > > > bsm...@petsc.dev>> wrote:
> > > > > > > >
> > > > > > > >   Need the same thing for the C++ preprocessor flag
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >> On Nov 4, 2021, at 5:44 PM, Mark Adams  >  > > > > > mfad...@lbl.gov>> wrote:
> > > > > > > >>
> > > > > > > >> It gets a lot further.
> > > > > > > >>
> > > > > > > >> On Thu, Nov 4, 2021 at 5:32 PM Mark Adams  > > >  > > > > > mfad...@lbl.gov>> wrote:
> > > > > > > >> OK, sorry I missed the CPPFLAGS. It is running now.
> > > > > > > >> Thanks,
> > > > > > > >>
> > > > > > > >> On Thu, Nov 4, 2021 at 4:43 PM Satish Balay <
> > ba...@mcs.anl.gov
> > > > > > > wrote:
> > > > > > > >> Multiple e-mail threads on the same issue (:
> > > > > > > >>
> > > > > > > >> As suggested in my earlier thread - add -mp=gpu to both
> > CPPFLAGS
> > > > and
> > > > > > CFLAGS [or LDFLAGS]
> > > > > > > >>
> > > > > > > >> Satish
> > > > > > > >>
> > > > > > > >> ---
> > > > > > > >> Executing: cc  -o
> > /tmp/petsc-Vvs8_T/config.setCompilers/conftest
> > > >  -g
> > > > > > -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4
> > > > > > /tmp/petsc-Vvs8_T/config.setCompilers/conftest.o
> > > > 

Re: [petsc-dev] I am getting this error ...

2021-11-05 Thread Satish Balay via petsc-dev
Yeah remove [C,CPP,CXX,CXXPP,F] FLAGS

CPP defaults to '$CC -E' - so with "--with-cc='cc -mp=gpu'" - it should use "cc 
-mp=gpu -E"

Satish

On Fri, 5 Nov 2021, Mark Adams wrote:

> How about CPP flags?
> 
> On Fri, Nov 5, 2021 at 12:29 PM Satish Balay  wrote:
> 
> > I guess another way to deal with this is: not use CFLAGS etc..
> >
> >
> > --with-cc='cc -mp=gpu' --with-cxx='CC -mp=gpu' --with-fc='ftn -mp=gpu'
> >
> > Satish
> >
> >
> > On Fri, 5 Nov 2021, Mark Adams wrote:
> >
> > > Yes, thanks.
> > > I emailed the NERSc person and told him where we are and that we could
> > fix
> > > this manually, but I don't have this issue on Summit ...
> > > He did not understand for the longest time, not sure he does now, that we
> > > do not add -gpu.
> > > And I gave him your reproducer.
> > > With any luck, with the reproducer, they can figure this out at some
> > point.
> > > MPI is not working well for my scaling studies and my app has also given
> > up
> > > on nvhpc, so we are good.
> > >
> > >
> > > On Fri, Nov 5, 2021 at 9:49 AM Satish Balay  wrote:
> > >
> > > > For now - you could manually edit petscvariables and remove  -mp=gpu
> > from
> > > > it.
> > > >
> > > > Its primarily required to make configure happy.
> > > >
> > > > Satish
> > > >
> > > >
> > > >
> > > > On Thu, 4 Nov 2021, Barry Smith wrote:
> > > >
> > > > >
> > > > >This comes from the persistent problem with PETSc's make system
> > using
> > > > too many flags for compiling CUDA that have not been tested by
> > configure.
> > > > See below the -mp=gpu is provided probably from the CPPFLAGS or
> > > > CXXPPFLAGS(sp) that is improperly used by the PETSc makefiles to
> > compile
> > > > CUDA code!
> > > > >
> > > > >
> > > > > Using CUDA compile:
> > > > /global/common/software/nersc/cos1.3/cuda/11.3.0/bin/nvcc -o .o
> > > > -I/opt/cray/pe/mpich/8.1.10/ofi/nvidia/20.7/include
> > -I/opt/cray/pe/libsci/
> > > > 21.08.1.2/NVIDIA/20.7/x86_64/include -I/opt/cray/pe/pmi/6.0.14/include
> > > > -I/opt/cray/pe/dsmml/0.2.2/dsmml//include
> > > > -I/opt/cray/xpmem/2.2.40-7.0.1.0_3.1__g1d7a24d.shasta/include  -g
> > > > -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10
> > > > -DLANDAU_MAX_Q=4 -Xcompiler -fPIC -std=c++17 -gencode
> > > > arch=compute_80,code=sm_80  -Wno-deprecated-gpu-targets  -c
> > > > --compiler-options=-I/global/homes/m/madams/petsc/include
> > > > -I/global/homes/m/madams/petsc/arch-perlmutter-opt-nvidia-cuda/include
> > > > -I/global/common/software/nersc/cos1.3/cuda/11.3.0/include -mp=gpu
> > > > -I/global/homes/m/madams/petsc/include
> > > > -I/global/homes/m/madams/petsc/arch-perlmutter-opt-nvidia-cuda/include
> > > > -I/global/common/software/nersc/cos1.3/cuda/11.3.0/include
> > > > >
> > > > >
> > > > >
> > > > > > On Nov 4, 2021, at 7:31 PM, Mark Adams  wrote:
> > > > > >
> > > > > > OK, configure is done.
> > > > > > Maybe I have too many -mp=gpu
> > > > > >
> > > > > >CUDAC
> > > >
> > arch-perlmutter-opt-nvidia-cuda/obj/sys/classes/random/impls/curand/curand2.o
> > > > > > gcc: error: unrecognized command line option ‘-mp=gpu’; did you
> > mean
> > > > ‘-mpku’?
> > > > > >
> > > > > > On Thu, Nov 4, 2021 at 5:51 PM Barry Smith  >  > > > bsm...@petsc.dev>> wrote:
> > > > > >
> > > > > >   Need the same thing for the C++ preprocessor flag
> > > > > >
> > > > > >
> > > > > >
> > > > > >> On Nov 4, 2021, at 5:44 PM, Mark Adams  > > > mfad...@lbl.gov>> wrote:
> > > > > >>
> > > > > >> It gets a lot further.
> > > > > >>
> > > > > >> On Thu, Nov 4, 2021 at 5:32 PM Mark Adams  >  > > > mfad...@lbl.gov>> wrote:
> > > > > >> OK, sorry I missed the CPPFLAGS. It is running now.
> > > > > >> Thanks,
> > > > > >>
> > > > > >> On Thu, Nov 4, 2021 at 4:43 PM Satish Balay  > > > > wrote:
> > > > > >> Multiple e-mail threads on the same issue (:
> > > > > >>
> > > > > >> As suggested in my earlier thread - add -mp=gpu to both CPPFLAGS
> > and
> > > > CFLAGS [or LDFLAGS]
> > > > > >>
> > > > > >> Satish
> > > > > >>
> > > > > >> ---
> > > > > >> Executing: cc  -o /tmp/petsc-Vvs8_T/config.setCompilers/conftest
> >  -g
> > > > -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4
> > > > /tmp/petsc-Vvs8_T/config.setCompilers/conftest.o
> > > > > >> Possible ERROR while running linker:
> > > > > >> stderr:
> > > > > >> nvc-Warning-The -gpu option has no effect unless a
> > language-specific
> > > > option to enable GPU code generation is used (e.g.: -acc, -mp=gpu,
> > -stdpar,
> > > > -cuda)
> > > > > >>
> > > > > >>
> > > > > >> On Thu, 4 Nov 2021, Mark Adams wrote:
> > > > > >>
> > > > > >> > It is CPPFLAGS. I seem to get the same behavior.
> > > > > >> >
> > > > > >> > FWIW, I did get this response from NERSc but I don't know how to
> > > > interpret
> > > > > >> > it.
> > > > > >> >
> > > > > >> > He seems to be saying that I don't need -mp=gpu for the device
> > > > compiler
> > > > > >> > (nvcc). He seems to think that I am adding -gpu.
> > > > > >> >
> > > > > >> > nvcc -- The 

Re: [petsc-dev] I am getting this error ...

2021-11-05 Thread Satish Balay via petsc-dev
I guess another way to deal with this is to not use CFLAGS etc.


--with-cc='cc -mp=gpu' --with-cxx='CC -mp=gpu' --with-fc='ftn -mp=gpu'

Satish


On Fri, 5 Nov 2021, Mark Adams wrote:

> Yes, thanks.
> I emailed the NERSc person and told him where we are and that we could fix
> this manually, but I don't have this issue on Summit ...
> He did not understand for the longest time, not sure he does now, that we
> do not add -gpu.
> And I gave him your reproducer.
> With any luck, with the reproducer, they can figure this out at some point.
> MPI is not working well for my scaling studies and my app has also given up
> on nvhpc, so we are good.
> 
> 
> On Fri, Nov 5, 2021 at 9:49 AM Satish Balay  wrote:
> 
> > For now - you could manually edit petscvariables and remove  -mp=gpu from
> > it.
> >
> > Its primarily required to make configure happy.
> >
> > Satish
> >
> >
> >
> > On Thu, 4 Nov 2021, Barry Smith wrote:
> >
> > >
> > >This comes from the persistent problem with PETSc's make system using
> > too many flags for compiling CUDA that have not been tested by configure.
> > See below the -mp=gpu is provided probably from the CPPFLAGS or
> > CXXPPFLAGS(sp) that is improperly used by the PETSc makefiles to compile
> > CUDA code!
> > >
> > >
> > > Using CUDA compile:
> > /global/common/software/nersc/cos1.3/cuda/11.3.0/bin/nvcc -o .o
> > -I/opt/cray/pe/mpich/8.1.10/ofi/nvidia/20.7/include -I/opt/cray/pe/libsci/
> > 21.08.1.2/NVIDIA/20.7/x86_64/include -I/opt/cray/pe/pmi/6.0.14/include
> > -I/opt/cray/pe/dsmml/0.2.2/dsmml//include
> > -I/opt/cray/xpmem/2.2.40-7.0.1.0_3.1__g1d7a24d.shasta/include  -g
> > -Xcompiler -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10
> > -DLANDAU_MAX_Q=4 -Xcompiler -fPIC -std=c++17 -gencode
> > arch=compute_80,code=sm_80  -Wno-deprecated-gpu-targets  -c
> > --compiler-options=-I/global/homes/m/madams/petsc/include
> > -I/global/homes/m/madams/petsc/arch-perlmutter-opt-nvidia-cuda/include
> > -I/global/common/software/nersc/cos1.3/cuda/11.3.0/include -mp=gpu
> > -I/global/homes/m/madams/petsc/include
> > -I/global/homes/m/madams/petsc/arch-perlmutter-opt-nvidia-cuda/include
> > -I/global/common/software/nersc/cos1.3/cuda/11.3.0/include
> > >
> > >
> > >
> > > > On Nov 4, 2021, at 7:31 PM, Mark Adams  wrote:
> > > >
> > > > OK, configure is done.
> > > > Maybe I have too many -mp=gpu
> > > >
> > > >CUDAC
> > arch-perlmutter-opt-nvidia-cuda/obj/sys/classes/random/impls/curand/curand2.o
> > > > gcc: error: unrecognized command line option ‘-mp=gpu’; did you mean
> > ‘-mpku’?
> > > >
> > > > On Thu, Nov 4, 2021 at 5:51 PM Barry Smith  > bsm...@petsc.dev>> wrote:
> > > >
> > > >   Need the same thing for the C++ preprocessor flag
> > > >
> > > >
> > > >
> > > >> On Nov 4, 2021, at 5:44 PM, Mark Adams  > mfad...@lbl.gov>> wrote:
> > > >>
> > > >> It gets a lot further.
> > > >>
> > > >> On Thu, Nov 4, 2021 at 5:32 PM Mark Adams  > mfad...@lbl.gov>> wrote:
> > > >> OK, sorry I missed the CPPFLAGS. It is running now.
> > > >> Thanks,
> > > >>
> > > >> On Thu, Nov 4, 2021 at 4:43 PM Satish Balay  > > wrote:
> > > >> Multiple e-mail threads on the same issue (:
> > > >>
> > > >> As suggested in my earlier thread - add -mp=gpu to both CPPFLAGS and
> > CFLAGS [or LDFLAGS]
> > > >>
> > > >> Satish
> > > >>
> > > >> ---
> > > >> Executing: cc  -o /tmp/petsc-Vvs8_T/config.setCompilers/conftest   -g
> > -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4
> > /tmp/petsc-Vvs8_T/config.setCompilers/conftest.o
> > > >> Possible ERROR while running linker:
> > > >> stderr:
> > > >> nvc-Warning-The -gpu option has no effect unless a language-specific
> > option to enable GPU code generation is used (e.g.: -acc, -mp=gpu, -stdpar,
> > -cuda)
> > > >>
> > > >>
> > > >> On Thu, 4 Nov 2021, Mark Adams wrote:
> > > >>
> > > >> > It is CPPFLAGS. I seem to get the same behavior.
> > > >> >
> > > >> > FWIW, I did get this response from NERSc but I don't know how to
> > interpret
> > > >> > it.
> > > >> >
> > > >> > He seems to be saying that I don't need -mp=gpu for the device
> > compiler
> > > >> > (nvcc). He seems to think that I am adding -gpu.
> > > >> >
> > > >> > nvcc -- The device compiler does not need any of those flags
> > because it
> > > >> > already knows that it's being fed cuda code. The warning you're
> > seeing is
> > > >> > coming from nvc (which is the host / CPU side compiler) if you're
> > in the
> > > >> > PrgEnv-nvidia environment. You should not need to add -mp=gpu and
> > -cuda,
> > > >> > please just add the -cuda flag (to your host code) not to the
> > device code.
> > > >> >
> > > >> > I will try to talk with this guy again.
> > > >> >
> > > >> > Thanks,
> > > >> >
> > > >> >
> > > >> > On Thu, Nov 4, 2021 at 4:11 PM Barry Smith  > > wrote:
> > > >> >
> > > >> > >
> > > >> > >   Yes, you need to use the CPPFLAGS which maybe called CPPCFLAGS
> > I am not
> > > >> > > sure
> > > >> > >
> > > >> > >
> > > >> > > On 

Re: [petsc-dev] I am getting this error ...

2021-11-05 Thread Satish Balay via petsc-dev
For now - you could manually edit petscvariables and remove  -mp=gpu from it.

It's primarily required to make configure happy.
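
i.e. something like [adjust if needed - I believe the generated file is $PETSC_ARCH/lib/petsc/conf/petscvariables]:

  sed -i.bak 's/ -mp=gpu//g' $PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/petscvariables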

Satish



On Thu, 4 Nov 2021, Barry Smith wrote:

> 
>This comes from the persistent problem with PETSc's make system using too 
> many flags for compiling CUDA that have not been tested by configure. See 
> below the -mp=gpu is provided probably from the CPPFLAGS or CXXPPFLAGS(sp) 
> that is improperly used by the PETSc makefiles to compile CUDA code! 
> 
> 
> Using CUDA compile: /global/common/software/nersc/cos1.3/cuda/11.3.0/bin/nvcc 
> -o .o -I/opt/cray/pe/mpich/8.1.10/ofi/nvidia/20.7/include 
> -I/opt/cray/pe/libsci/21.08.1.2/NVIDIA/20.7/x86_64/include 
> -I/opt/cray/pe/pmi/6.0.14/include -I/opt/cray/pe/dsmml/0.2.2/dsmml//include 
> -I/opt/cray/xpmem/2.2.40-7.0.1.0_3.1__g1d7a24d.shasta/include  -g -Xcompiler 
> -rdynamic -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4 -Xcompiler 
> -fPIC -std=c++17 -gencode arch=compute_80,code=sm_80  
> -Wno-deprecated-gpu-targets  -c 
> --compiler-options=-I/global/homes/m/madams/petsc/include 
> -I/global/homes/m/madams/petsc/arch-perlmutter-opt-nvidia-cuda/include 
> -I/global/common/software/nersc/cos1.3/cuda/11.3.0/include -mp=gpu 
> -I/global/homes/m/madams/petsc/include 
> -I/global/homes/m/madams/petsc/arch-perlmutter-opt-nvidia-cuda/include 
> -I/global/common/software/nersc/cos1.3/cuda/11.3.0/include 
> 
> 
> 
> > On Nov 4, 2021, at 7:31 PM, Mark Adams  wrote:
> > 
> > OK, configure is done.
> > Maybe I have too many -mp=gpu
> > 
> >CUDAC 
> > arch-perlmutter-opt-nvidia-cuda/obj/sys/classes/random/impls/curand/curand2.o
> > gcc: error: unrecognized command line option ‘-mp=gpu’; did you mean 
> > ‘-mpku’?
> > 
> > On Thu, Nov 4, 2021 at 5:51 PM Barry Smith  > > wrote:
> > 
> >   Need the same thing for the C++ preprocessor flag
> > 
> > 
> > 
> >> On Nov 4, 2021, at 5:44 PM, Mark Adams  >> > wrote:
> >> 
> >> It gets a lot further.
> >> 
> >> On Thu, Nov 4, 2021 at 5:32 PM Mark Adams  >> > wrote:
> >> OK, sorry I missed the CPPFLAGS. It is running now.
> >> Thanks,
> >> 
> >> On Thu, Nov 4, 2021 at 4:43 PM Satish Balay  >> > wrote:
> >> Multiple e-mail threads on the same issue (:
> >> 
> >> As suggested in my earlier thread - add -mp=gpu to both CPPFLAGS and 
> >> CFLAGS [or LDFLAGS]
> >> 
> >> Satish
> >> 
> >> ---
> >> Executing: cc  -o /tmp/petsc-Vvs8_T/config.setCompilers/conftest   -g 
> >> -DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4 
> >> /tmp/petsc-Vvs8_T/config.setCompilers/conftest.o
> >> Possible ERROR while running linker:
> >> stderr:
> >> nvc-Warning-The -gpu option has no effect unless a language-specific 
> >> option to enable GPU code generation is used (e.g.: -acc, -mp=gpu, 
> >> -stdpar, -cuda)
> >> 
> >> 
> >> On Thu, 4 Nov 2021, Mark Adams wrote:
> >> 
> >> > It is CPPFLAGS. I seem to get the same behavior.
> >> > 
> >> > FWIW, I did get this response from NERSc but I don't know how to 
> >> > interpret
> >> > it.
> >> > 
> >> > He seems to be saying that I don't need -mp=gpu for the device compiler
> >> > (nvcc). He seems to think that I am adding -gpu.
> >> > 
> >> > nvcc -- The device compiler does not need any of those flags because it
> >> > already knows that it's being fed cuda code. The warning you're seeing is
> >> > coming from nvc (which is the host / CPU side compiler) if you're in the
> >> > PrgEnv-nvidia environment. You should not need to add -mp=gpu and -cuda,
> >> > please just add the -cuda flag (to your host code) not to the device 
> >> > code.
> >> > 
> >> > I will try to talk with this guy again.
> >> > 
> >> > Thanks,
> >> > 
> >> > 
> >> > On Thu, Nov 4, 2021 at 4:11 PM Barry Smith  >> > > wrote:
> >> > 
> >> > >
> >> > >   Yes, you need to use the CPPFLAGS which maybe called CPPCFLAGS I am 
> >> > > not
> >> > > sure
> >> > >
> >> > >
> >> > > On Nov 4, 2021, at 3:23 PM, Mark Adams  >> > > > wrote:
> >> > >
> >> > > Ah, CCFLAGS does not seem to work.
> >> > >
> >> > > On Thu, Nov 4, 2021 at 3:07 PM Barry Smith  >> > > > wrote:
> >> > >
> >> > >>
> >> > >>   You have to pass in the flag to turn off the bitching about -gpu to 
> >> > >> the
> >> > >> C preprocessor, not the C compiler.
> >> > >>
> >> > >>
> >> > >> stderr:
> >> > >> nvc-Warning-The -gpu option has no effect unless a language-specific
> >> > >> option to enable GPU code generation is used (e.g.: -acc, -mp=gpu, 
> >> > >> -stdpar,
> >> > >> -cuda)
> >> > >> Source:
> >> > >> #include "confdefs.h"
> >> > >> #include "conffix.h"
> >> > >> #include 
> >> > >>
> >> > >>
> >> > >>
> >> > >> > On Nov 4, 2021, at 2:49 PM, Mark Adams  >> > >> > > wrote:
> >> > >> >
> >> > >> > on Perlmutter with nvhpc:
> >> > >> >
> >> > >> >   Defined make macro "CPP" to "cc --use cpp32"
> >> > >> > 

Re: [petsc-dev] I am getting this error ...

2021-11-04 Thread Satish Balay via petsc-dev
Multiple e-mail threads on the same issue (:

As suggested in my earlier thread - add -mp=gpu to both CPPFLAGS and CFLAGS [or 
LDFLAGS]
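
i.e. something along these lines - a sketch only, keep whatever other configure options
you are already using:

  ./configure CPPFLAGS=-mp=gpu CFLAGS=-mp=gpu LDFLAGS=-mp=gpu [...your other options...]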

Satish

---
Executing: cc  -o /tmp/petsc-Vvs8_T/config.setCompilers/conftest   -g 
-DLANDAU_DIM=2 -DLANDAU_MAX_SPECIES=10 -DLANDAU_MAX_Q=4 
/tmp/petsc-Vvs8_T/config.setCompilers/conftest.o
Possible ERROR while running linker:
stderr:
nvc-Warning-The -gpu option has no effect unless a language-specific option to 
enable GPU code generation is used (e.g.: -acc, -mp=gpu, -stdpar, -cuda)


On Thu, 4 Nov 2021, Mark Adams wrote:

> It is CPPFLAGS. I seem to get the same behavior.
> 
> FWIW, I did get this response from NERSc but I don't know how to interpret
> it.
> 
> He seems to be saying that I don't need -mp=gpu for the device compiler
> (nvcc). He seems to think that I am adding -gpu.
> 
> nvcc -- The device compiler does not need any of those flags because it
> already knows that it's being fed cuda code. The warning you're seeing is
> coming from nvc (which is the host / CPU side compiler) if you're in the
> PrgEnv-nvidia environment. You should not need to add -mp=gpu and -cuda,
> please just add the -cuda flag (to your host code) not to the device code.
> 
> I will try to talk with this guy again.
> 
> Thanks,
> 
> 
> On Thu, Nov 4, 2021 at 4:11 PM Barry Smith  wrote:
> 
> >
> >   Yes, you need to use the CPPFLAGS which maybe called CPPCFLAGS I am not
> > sure
> >
> >
> > On Nov 4, 2021, at 3:23 PM, Mark Adams  wrote:
> >
> > Ah, CCFLAGS does not seem to work.
> >
> > On Thu, Nov 4, 2021 at 3:07 PM Barry Smith  wrote:
> >
> >>
> >>   You have to pass in the flag to turn off the bitching about -gpu to the
> >> C preprocessor, not the C compiler.
> >>
> >>
> >> stderr:
> >> nvc-Warning-The -gpu option has no effect unless a language-specific
> >> option to enable GPU code generation is used (e.g.: -acc, -mp=gpu, -stdpar,
> >> -cuda)
> >> Source:
> >> #include "confdefs.h"
> >> #include "conffix.h"
> >> #include 
> >>
> >>
> >>
> >> > On Nov 4, 2021, at 2:49 PM, Mark Adams  wrote:
> >> >
> >> > on Perlmutter with nvhpc:
> >> >
> >> >   Defined make macro "CPP" to "cc --use cpp32"
> >> > Preprocessing source:
> >> > #include "confdefs.h"
> >> > #include "conffix.h"
> >> > #include 
> >> >
> >> > Executing: cc --use cpp32  -I/tmp/petsc-jV9U1b/config.setCompilers
> >> /tmp/petsc-jV9U1b/config.setCompilers/conftest.c
> >> > Possible ERROR while running preprocessor: exit code 1
> >> > stderr:
> >> > nvc-Error-Unknown switch: --use
> >> > Source:
> >> > 
> >>
> >> 
> >
> >
> >
> 



Re: [petsc-dev] invocation of nvcc

2021-11-04 Thread Satish Balay via petsc-dev
Then you can try:

CPPFLAGS=-mp=gpu
CFLAGS=-mp=gpu
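
e.g. to confirm the warning is gone before re-running configure - a quick check, using
the same test.c as in the message below:

  cc -E -mp=gpu test.c > /dev/null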

Satish

On Thu, 4 Nov 2021, Mark Adams wrote:

> But I can get rid of that with: cc -E -mp=gpu
> I expect that is what they will say.
> 
> On Thu, Nov 4, 2021 at 1:40 PM Satish Balay  wrote:
> 
> > I think we went through this issue before.
> >
> > nvc is the 'c' compiler. And for some reason its giving 'cuda' warnings.
> >
> > I think you might have switched progenv last time [and avoided this
> > compiler]
> >
> > You can try using this compiler manually - on simple code - and then seek
> > help from the admins on how to avoid these warnings..
> >
> > Satish
> >
> > - create test.c with:
> >
> > #include 
> >
> > compile (or preporcess):
> >
> > cc -E  test.c
> >
> > - You should get:
> >
> > stderr:
> > nvc-Warning-The -gpu option has no effect unless a language-specific
> > option to enable GPU code generation is used (e.g.: -acc, -mp=gpu, -stdpar,
> > -cuda)
> >
> > Now provide this info to the machine admins [or nvidia folk] - and ask how
> > to get rid of this message.
> >
> > Satish
> >
> >
> > On Thu, 4 Nov 2021, Mark Adams wrote:
> >
> > > Correction it is nvc:
> > >
> > > nvc-Warning-The -gpu option has no effect unless a language-specific
> > option
> > > to enable GPU code generation is used (e.g.: -acc, -mp=gpu, -stdpar,
> > -cuda)
> > >
> > > And I add  -mp=gpu to CUDAFLAGS
> > >
> > >
> > > On Thu, Nov 4, 2021 at 1:09 PM Satish Balay  wrote:
> > >
> > > >
> > > > On Thu, 4 Nov 2021, Mark Adams wrote:
> > > >
> > > > > Does anyone know if PETSc calls nvcc and hence can add flags to the
> > > > > invocation? nvcc wants a flag like -mp=gpu but I don't know if we do
> > that
> > > > > or a compiler wrapper in the environment.
> > > >
> > > > $ ./configure --help |grep CUDA
> > > > 
> > > >   --CUDAC=
> > > >Specify the CUDA compiler
> > > >   --CUDAFLAGS=
> > > >Specify the CUDA compiler options
> > > > 
> > > >
> > > > Satish
> > > >
> > >
> >
> >
> 



Re: [petsc-dev] invocation of nvcc

2021-11-04 Thread Satish Balay via petsc-dev
I think we went through this issue before.

nvc is the 'c' compiler. And for some reason it's giving 'cuda' warnings.

I think you might have switched PrgEnv last time [and avoided this compiler]

You can try using this compiler manually - on simple code - and then seek help
from the admins on how to avoid these warnings.

Satish

- create test.c with:

#include 

compile (or preprocess):

cc -E  test.c

- You should get:

stderr:
nvc-Warning-The -gpu option has no effect unless a language-specific option to 
enable GPU code generation is used (e.g.: -acc, -mp=gpu, -stdpar, -cuda)

Now provide this info to the machine admins [or nvidia folk] - and ask how to 
get rid of this message.
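
e.g. a one-liner version for the report - a sketch, with stdio.h standing in for whatever
header you put in test.c:

  printf '#include <stdio.h>\n' > test.c
  cc -E test.c > /dev/null 2> nvc-warning.txt   # the stderr warning lands in nvc-warning.txt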

Satish


On Thu, 4 Nov 2021, Mark Adams wrote:

> Correction it is nvc:
> 
> nvc-Warning-The -gpu option has no effect unless a language-specific option
> to enable GPU code generation is used (e.g.: -acc, -mp=gpu, -stdpar, -cuda)
> 
> And I add  -mp=gpu to CUDAFLAGS
> 
> 
> On Thu, Nov 4, 2021 at 1:09 PM Satish Balay  wrote:
> 
> >
> > On Thu, 4 Nov 2021, Mark Adams wrote:
> >
> > > Does anyone know if PETSc calls nvcc and hence can add flags to the
> > > invocation? nvcc wants a flag like -mp=gpu but I don't know if we do that
> > > or a compiler wrapper in the environment.
> >
> > $ ./configure --help |grep CUDA
> > 
> >   --CUDAC=
> >Specify the CUDA compiler
> >   --CUDAFLAGS=
> >Specify the CUDA compiler options
> > 
> >
> > Satish
> >
> 



Re: [petsc-dev] invocation of nvcc

2021-11-04 Thread Satish Balay via petsc-dev


On Thu, 4 Nov 2021, Mark Adams wrote:

> Does anyone know if PETSc calls nvcc and hence can add flags to the
> invocation? nvcc wants a flag like -mp=gpu but I don't know if we do that
> or a compiler wrapper in the environment.

$ ./configure --help |grep CUDA

  --CUDAC=
   Specify the CUDA compiler
  --CUDAFLAGS=
   Specify the CUDA compiler options
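
For example - a sketch only, the exact flags depend on your GPU/toolkit:

  ./configure --with-cuda --CUDAC=nvcc --CUDAFLAGS='-gencode arch=compute_80,code=sm_80'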


Satish


[petsc-dev] petsc-3.16.1 now available

2021-11-02 Thread Satish Balay via petsc-dev
Dear PETSc users,

The patch release petsc-3.16.1 is now available for download.

https://petsc.org/release/download/

Satish




Re: [petsc-dev] CI is failing on two of my MRs in docs-rev???

2021-10-20 Thread Satish Balay via petsc-dev
Should be fixed in latest main - so you can try starting a new pipeline

Satish

On Wed, 20 Oct 2021, Mark Adams wrote:

> 
> 



Re: [petsc-dev] libpetsc.so: undefined references

2021-10-02 Thread Satish Balay via petsc-dev
Fix at https://gitlab.com/petsc/petsc/-/merge_requests/4402

thanks,
Satish

On Sat, 2 Oct 2021, Jacob Faibussowitsch wrote:

> Unrelated to the below (still reading the configure.log) but it looks like 
> there’s a bug in the cuda compiler search:
> 
> TESTING: checkCUDACompiler from 
> config.setCompilers(/builddir/build/BUILD/petsc-3.16.0/petsc-3.16.0/config/BuildSystem/config/setCompilers.py:862)
>   Locate a functional CUDA compiler
> Checking for program /usr/bin/nvcc...not found
> Checking for program /bin/nvcc...not found
> Checking for program /usr/sbin/nvcc...not found
> Checking for program /sbin/nvcc...not found
> Checking for program /usr/local/sbin/nvcc...not found
> Checking for program 
> /builddir/build/BUILD/petsc-3.16.0/petsc-3.16.0/lib/petsc/bin/win32fe/nvcc...not
>  found
> Checking for program /Developer/NVIDIA/CUDA-6.5/bin/nvcc...not found
> Checking for program 
> /builddir/build/BUILD/petsc-3.16.0/petsc-3.16.0/lib/petsc/bin/win32fe/nvcc...not
>  found
>   Unable to find programs ['nvcc'] providing listing of the specific search 
> path
>   Warning accessing /Developer/NVIDIA/CUDA-6.5/bin gives errors: can only 
> concatenate str (not "builtin_function_or_method") to str
> Checking for program /usr/local/cuda/bin/nvcc...not found
> Checking for program 
> /builddir/build/BUILD/petsc-3.16.0/petsc-3.16.0/lib/petsc/bin/win32fe/nvcc...not
>  found
>   Unable to find programs ['nvcc'] providing listing of the specific search 
> path
>   Warning accessing /usr/local/cuda/bin gives errors: can only 
> concatenate str (not "builtin_function_or_method") to str
> 
> 
> Best regards,
> 
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> 
> > On Oct 2, 2021, at 08:18, Stefano Zampini  wrote:
> > 
> > I knew this was coming
> > https://gitlab.com/petsc/petsc/-/issues/997 
> > 
> > 
> > On Sat 2 Oct 2021, 15:48 Antonio T. sagitter wrote:
> > Hi all.
> > 
> > In PETSc-3.16.0, the linker is not working because of these undefined 
> > references (see https://pastebin.com/izGTfmMp 
> > ):
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `operator 
> > delete(void*, unsigned long)'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `__cxa_rethrow'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to 
> > `__gxx_personality_v0'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `operator 
> > new(unsigned long)'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to 
> > `std::__throw_bad_alloc()'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to 
> > `std::terminate()'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to 
> > `std::__throw_bad_array_new_length()'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `typeinfo 
> > for std::exception'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to 
> > `__cxa_begin_catch'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to 
> > `__cxa_end_catch'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to 
> > `std::__throw_length_error(char const*)'
> > 
> > collect2: error: ld returned 1 exit status
> > 
> > There are also
> > 
> > $ ldd -r build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so'
> > Start: shell
> > 
> > linux-vdso.so.1 (0x7fffbf347000)
> > 
> > libsuperlu.so.5.2 => /lib64/libsuperlu.so.5.2 (0x7f00dad0c000)
> > 
> > libflexiblas.so.3 => /lib64/libflexiblas.so.3 (0x7f00da95a000)
> > 
> > libcgns.so.4.2 => /lib64/libcgns.so.4.2 (0x7f00da873000)
> > 
> > libhdf5.so.103 => /lib64/libhdf5.so.103 (0x7f00da4d8000)
> > 
> > libm.so.6 => /lib64/libm.so.6 (0x7f00da3f8000)
> > 
> > libX11.so.6 => /lib64/libX11.so.6 (0x7f00da2ae000)
> > 
> > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f00da294000)
> > 
> > libc.so.6 => /lib64/libc.so.6 (0x7f00da08b000)
> > 
> > libgfortran.so.5 => /lib64/libgfortran.so.5 (0x7f00d9dde000)
> > 
> > libquadmath.so.0 => /lib64/libquadmath.so.0 (0x7f00d9d94000)
> > 
> > /lib64/ld-linux-x86-64.so.2 (0x7f00dc18b000)
> > 
> > libsz.so.2 => /lib64/libsz.so.2 (0x7f00d9d8a000)
> > 
> > libz.so.1 => /lib64/libz.so.1 (0x7f00d9d6e000)
> > 
> > libxcb.so.1 => /lib64/libxcb.so.1 (0x7f00d9d43000)
> > 
> > libXau.so.6 => /lib64/libXau.so.6 (0x7f00d9d3d000)
> > 
> > undefined symbol: _ZTISt9exception 
> > (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> > 
> > undefined symbol: __gxx_personality_v0 
> > (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> > 
> > undefined symbol: _ZdlPvm 
> > (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> > 
> > undefined symbol: 

Re: [petsc-dev] libpetsc.so: undefined references

2021-10-02 Thread Satish Balay via petsc-dev
BTW: LIBS is more appropriate, as it's the difference of:

gcc -lstdc++ ex19.o -lpetsc
vs
gcc ex19.o -lpetsc -lstdc++

i.e

$CLINKER $CC_LINKER_FLAGS $OBJ $PETSC_LIB $LIBS
$FLINKER $FC_LINKER_FLAGS $OBJ $PETSC_LIB $LIBS
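
So e.g. - a sketch, keeping the rest of the configure options as they are:

  ./configure LIBS=-lstdc++ --with-clib-autodetect=0 --with-fortranlib-autodetect=0 --with-cxxlib-autodetect=0 [...other options...]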

Satish

On Sat, 2 Oct 2021, Satish Balay via petsc-dev wrote:

>  --with-clib-autodetect=0 --with-fortranlib-autodetect=0 
> --with-cxxlib-autodetect=0
>  --CC_LINKER_FLAGS="-Wl,-z,relro -Wl,--as-needed  -Wl,-z,now 
> -specs=/usr/lib/rpm/redhat/redhat-hardened-ld 
> -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 "
>  --FC_LINKER_FLAGS="-Wl,-z,relro -Wl,--as-needed  -Wl,-z,now 
> -specs=/usr/lib/rpm/redhat/redhat-hardened-ld 
> -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1  -lgfortran" 
> 
> 
> Since autodetect is disabled - its expected the required compiler libraries 
> are passed in via LIBS or in this case CC_LINKER_FLAGS, FC_LINKER_FLAGS
> 
> i.e add -lstdc++ to both.
> 
> I'm surprised that you didn't need -lgfortran with CC_LINKER_FLAGS in there 
> as well - but get [as libpetsc.so is built with CLINKER]
> 
> >   libgfortran.so.5 => /lib64/libgfortran.so.5 (0x7f00d9dde000)
> 
> 
> 
> Satish
> 
> On Sat, 2 Oct 2021, Antonio T. sagitter wrote:
> 
> > Hi all.
> > 
> > In PETSc-3.16.0, the linker is not working because of these undefined
> > references (see https://pastebin.com/izGTfmMp):
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `operator
> > delete(void*, unsigned long)'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `__cxa_rethrow'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to
> > `__gxx_personality_v0'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `operator
> > new(unsigned long)'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to
> > `std::__throw_bad_alloc()'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to 
> > `std::terminate()'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to
> > `std::__throw_bad_array_new_length()'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `typeinfo for
> > std::exception'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to
> > `__cxa_begin_catch'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to 
> > `__cxa_end_catch'
> > 
> > /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to
> > `std::__throw_length_error(char const*)'
> > 
> > collect2: error: ld returned 1 exit status
> > 
> > There are also
> > 
> > $ ldd -r build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so'
> > Start: shell
> > 
> > linux-vdso.so.1 (0x7fffbf347000)
> > 
> > libsuperlu.so.5.2 => /lib64/libsuperlu.so.5.2 (0x7f00dad0c000)
> > 
> > libflexiblas.so.3 => /lib64/libflexiblas.so.3 (0x7f00da95a000)
> > 
> > libcgns.so.4.2 => /lib64/libcgns.so.4.2 (0x7f00da873000)
> > 
> > libhdf5.so.103 => /lib64/libhdf5.so.103 (0x7f00da4d8000)
> > 
> > libm.so.6 => /lib64/libm.so.6 (0x7f00da3f8000)
> > 
> > libX11.so.6 => /lib64/libX11.so.6 (0x7f00da2ae000)
> > 
> > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f00da294000)
> > 
> > libc.so.6 => /lib64/libc.so.6 (0x7f00da08b000)
> > 
> > libgfortran.so.5 => /lib64/libgfortran.so.5 (0x7f00d9dde000)
> > 
> > libquadmath.so.0 => /lib64/libquadmath.so.0 (0x7f00d9d94000)
> > 
> > /lib64/ld-linux-x86-64.so.2 (0x7f00dc18b000)
> > 
> > libsz.so.2 => /lib64/libsz.so.2 (0x7f00d9d8a000)
> > 
> > libz.so.1 => /lib64/libz.so.1 (0x7f00d9d6e000)
> > 
> > libxcb.so.1 => /lib64/libxcb.so.1 (0x7f00d9d43000)
> > 
> > libXau.so.6 => /lib64/libXau.so.6 (0x7f00d9d3d000)
> > 
> > undefined symbol: _ZTISt9exception
> > (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> > 
> > undefined symbol: __gxx_personality_v0
> > (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> > 
> > undefined symbol: _ZdlPvm
> > (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> > 
> > undefined symbol: __cxa_rethrow
> > (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> > 
> > undefined symbol: _Znwm
> > (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> > 
> > undefined symbol: _ZSt17__thro

Re: [petsc-dev] libpetsc.so: undefined references

2021-10-02 Thread Satish Balay via petsc-dev
 --with-clib-autodetect=0 --with-fortranlib-autodetect=0 
--with-cxxlib-autodetect=0
 --CC_LINKER_FLAGS="-Wl,-z,relro -Wl,--as-needed  -Wl,-z,now 
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 "
 --FC_LINKER_FLAGS="-Wl,-z,relro -Wl,--as-needed  -Wl,-z,now 
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1  -lgfortran" 


Since autodetect is disabled - it's expected that the required compiler libraries are
passed in via LIBS or, in this case, CC_LINKER_FLAGS and FC_LINKER_FLAGS.

i.e. add -lstdc++ to both.
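
For example - a sketch reusing the flags from above, with -lstdc++ appended:

  --CC_LINKER_FLAGS="-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -lstdc++"
  --FC_LINKER_FLAGS="-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -lgfortran -lstdc++"

[or pass LIBS="-lstdc++" and leave the linker flags unchanged]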

I'm surprised that you didn't need -lgfortran with CC_LINKER_FLAGS in there as
well [as libpetsc.so is built with CLINKER] - but you do get:

>   libgfortran.so.5 => /lib64/libgfortran.so.5 (0x7f00d9dde000)



Satish

On Sat, 2 Oct 2021, Antonio T. sagitter wrote:

> Hi all.
> 
> In PETSc-3.16.0, the linker is not working because of these undefined
> references (see https://pastebin.com/izGTfmMp):
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `operator
> delete(void*, unsigned long)'
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `__cxa_rethrow'
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to
> `__gxx_personality_v0'
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `operator
> new(unsigned long)'
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to
> `std::__throw_bad_alloc()'
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `std::terminate()'
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to
> `std::__throw_bad_array_new_length()'
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `typeinfo for
> std::exception'
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to
> `__cxa_begin_catch'
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to `__cxa_end_catch'
> 
> /usr/bin/ld: x86_64/lib/libpetsc.so: undefined reference to
> `std::__throw_length_error(char const*)'
> 
> collect2: error: ld returned 1 exit status
> 
> There are also
> 
> $ ldd -r build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so'
> Start: shell
> 
>   linux-vdso.so.1 (0x7fffbf347000)
> 
>   libsuperlu.so.5.2 => /lib64/libsuperlu.so.5.2 (0x7f00dad0c000)
> 
>   libflexiblas.so.3 => /lib64/libflexiblas.so.3 (0x7f00da95a000)
> 
>   libcgns.so.4.2 => /lib64/libcgns.so.4.2 (0x7f00da873000)
> 
>   libhdf5.so.103 => /lib64/libhdf5.so.103 (0x7f00da4d8000)
> 
>   libm.so.6 => /lib64/libm.so.6 (0x7f00da3f8000)
> 
>   libX11.so.6 => /lib64/libX11.so.6 (0x7f00da2ae000)
> 
>   libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f00da294000)
> 
>   libc.so.6 => /lib64/libc.so.6 (0x7f00da08b000)
> 
>   libgfortran.so.5 => /lib64/libgfortran.so.5 (0x7f00d9dde000)
> 
>   libquadmath.so.0 => /lib64/libquadmath.so.0 (0x7f00d9d94000)
> 
>   /lib64/ld-linux-x86-64.so.2 (0x7f00dc18b000)
> 
>   libsz.so.2 => /lib64/libsz.so.2 (0x7f00d9d8a000)
> 
>   libz.so.1 => /lib64/libz.so.1 (0x7f00d9d6e000)
> 
>   libxcb.so.1 => /lib64/libxcb.so.1 (0x7f00d9d43000)
> 
>   libXau.so.6 => /lib64/libXau.so.6 (0x7f00d9d3d000)
> 
> undefined symbol: _ZTISt9exception
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> undefined symbol: __gxx_personality_v0
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> undefined symbol: _ZdlPvm
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> undefined symbol: __cxa_rethrow
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> undefined symbol: _Znwm
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> undefined symbol: _ZSt17__throw_bad_allocv
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> undefined symbol: _ZSt9terminatev
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> undefined symbol: _ZSt28__throw_bad_array_new_lengthv
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> undefined symbol: __cxa_begin_catch
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> undefined symbol: __cxa_end_catch
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> undefined symbol: _ZSt20__throw_length_errorPKc
> (build/BUILD/petsc-3.16.0/petsc-3.16.0/x86_64/lib/libpetsc.so)
> 
> 
> 
> I'm attaching configure.log and make.log
> 
> --
> ---
> Antonio Trande
> Fedora Project
> mailto: sagit...@fedoraproject.org
> GPG key: 0x29FBC85D7A51CC2F
> GPG key server: https://keyserver1.pgp.com/
> 
> 


