Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Nicholas Arnold-Medabalimi
Hi Matt, I made a typo on the line statVecV(offset) = in my example, I agree. (I wrote that offhand since the actual assignment is much larger.) It should be statVecV(offset+1) =, so I'm confident it's not a 1/0 indexing thing. My question is more related to what is happening in the offsets.
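[Editor's note: for readers following the thread, the offset+1 above reflects that PETSc section offsets are 0-based while Fortran arrays obtained via VecGetArrayF90 are 1-based. A minimal sketch of the conversion (Python for illustration; the function name is hypothetical):

```python
def fortran_index(petsc_offset):
    """Convert a 0-based PETSc section offset to a 1-based Fortran array index."""
    return petsc_offset + 1

# A 0-based offset of 2475 addresses element 2476 of a 1-based Fortran array.
print(fortran_index(2475))  # → 2476
```
]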

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2023 at 9:37 AM Nicholas Arnold-Medabalimi < narno...@umich.edu> wrote: > Hi Matt > > I made a typo on the line statVecV(offset) = in my > example, I agree. (I wrote that offhand since the actual assignment is much > larger) I should be statVecV(offset+1) = so I'm confident it's

[petsc-users] Petsc DMLabel Fortran Stub request

2023-01-06 Thread Nicholas Arnold-Medabalimi
Hi Petsc Users I am trying to use the sequence of call DMLabelPropagateBegin(synchLabel,sf,ierr) call DMLabelPropagatePush(synchLabel,sf,PETSC_NULL_OPTIONS,PETSC_NULL_INTEGER,ierr) call DMLabelPropagateEnd(synchLabel,sf, ierr) in fortran. I apologize if I messed something up, it appears as if

Re: [petsc-users] cuda gpu eager initialization error cudaErrorNotSupported

2023-01-06 Thread Mark Lohry
It built+ran fine on a different system with an sm75 arch. Is there a documented minimum version, if that indeed is the cause? One minor hiccup FYI -- compilation of hypre fails with CUDA toolkit 12, due to cuSPARSE removing csrsv2Info_t (although it's still referenced in their docs...) in favor

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2023 at 2:28 AM Nicholas Arnold-Medabalimi < narno...@umich.edu> wrote: > Hi Petsc Users, > > I'm working with a dmplex system with a subsampled mesh distributed with > an overlap of 1. > > I'm encountering unusual situations when using VecGetOwnershipRange to > adjust the offset

Re: [petsc-users] cuda gpu eager initialization error cudaErrorNotSupported

2023-01-06 Thread Mark Lohry
These cards indeed do not support cudaDeviceGetMemPool -- cudaDeviceGetAttribute on cudaDevAttrMemoryPoolsSupported returns false, meaning they don't support cudaMallocAsync, so the first point of failure is the call to cudaDeviceGetMemPool during initialization. Would a workaround be to replace

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Nicholas Arnold-Medabalimi
Hi Matt, I apologize for any lack of clarity in the initial email. Looking at the initial output on rank 1: write(*,*) "cell",i,"offset",offset,'oStart',oStart, offset-oStart gives cell 0, offset 2475, oStart 2640, -165; cell 1, offset 2530, oStart 2640
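[Editor's note: in the output above, offset - oStart is the candidate local index, and a negative value indicates the section offset belongs to a point owned by another rank (an overlap/ghost point). A small sketch of that check (Python for illustration; the function name and the oEnd value are hypothetical):

```python
def local_index(offset, o_start, o_end):
    """Map a global-section offset to a local array index, or return None
    when the offset falls outside the locally owned range [o_start, o_end)."""
    if o_start <= offset < o_end:
        return offset - o_start
    return None  # point owned by another rank

# Values from the rank-1 output above: offset 2475, oStart 2640.
print(local_index(2475, 2640, 5280))  # → None: owned elsewhere
print(local_index(2700, 2640, 5280))  # → 60
```
]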

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Nicholas Arnold-Medabalimi
Apologies. If it helps, there is one cell of overlap in this small test case for a 2D mesh that is 1 cell in height and a number of cells in length. process 0: Petsc VecGetLocalSize 2750, size(stateVecV) 2750; process 1: Petsc VecGetLocalSize 2640, size(stateVecV)

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2023 at 9:56 AM Nicholas Arnold-Medabalimi < narno...@umich.edu> wrote: > Apologies. If it helps, there is one cell of overlap in this small test > case for a 2D mesh that is 1 cell in height and a number of cells in > length. > > process 0 > Petsc VecGetLocalSize 2750 >

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2023 at 10:10 AM Nicholas Arnold-Medabalimi < narno...@umich.edu> wrote: > Hi Matt > > I apologize for any lack of clarity in the initial email. > > looking at the initial output on rank 1 > write(*,*) "cell",i,"offset",offset,'oStart',oStart, offset-oStart > cell 0

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Nicholas Arnold-Medabalimi
Hi Matt, I appreciate the help. The section view is quite extensive because each cell has 55 dofs located at the cells and on certain faces. I've appended the first of these, which corresponds with the output in the first email, to save space. The following 54 are exactly the same but offset

Re: [petsc-users] error when trying to compile with HPDDM

2023-01-06 Thread Jose E. Roman
This happens because you have typed 'make -j128'. If you just do 'make', PETSc will choose a reasonable value (-j59 in your case). Satish: do we want to support this use case? Then a possible fix is: diff --git a/config/BuildSystem/config/packages/slepc.py

Re: [petsc-users] Error running configure on HDF5 in PETSc-3.18.3

2023-01-06 Thread Pierre Jolivet
> On 6 Jan 2023, at 4:49 PM, Danyang Su wrote: > > Hi All, > > I get ‘Error running configure on HDF5’ in PETSc-3.18.3 on MacOS, but no > problem on Ubuntu. Attached is the configuration log file. > > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mumps >

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Nicholas Arnold-Medabalimi
I am doing that as well (although not for vertex dofs). And I had it working quite well for purely cell-associated DOFs. But I realized later that I also wanted to transmit some DOFs associated with faces so I suspect I'm messing something up there. Something we discussed back on 12/26 (email

Re: [petsc-users] Definition of threshold(s) in gamg and boomerAMG

2023-01-06 Thread Mark Adams
These thresholds are for completely different coarsening algorithms. GAMG just drops edges below the threshold, and hypre (classical AMG) does something completely different. Mark On Fri, Jan 6, 2023 at 10:43 AM Edoardo Centofanti < edoardo.centofant...@universitadipavia.it> wrote: > Hi PETSc

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2023 at 10:41 AM Nicholas Arnold-Medabalimi < narno...@umich.edu> wrote: > Hi Matt > > I appreciate the help. The section view is quite extensive because each > cell has 55 dofs located at the cells and on certain faces. I've appended > the first of these which corresponds with the

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Nicholas Arnold-Medabalimi
Hi Matt, this was generated using the DMPlexDistributeField which we discussed a while back. Everything seemed to be working fine when I only had cell dofs, but I recently added face dofs, which seems to have caused some issues. What's weird is that I'm feeding the same distribution SF and

[petsc-users] Definition of threshold(s) in gamg and boomerAMG

2023-01-06 Thread Edoardo Centofanti
Hi PETSc users, I was looking for the exact definitions of the threshold parameter (-pc_gamg_threshold) for gamg and of the strong threshold (-pc_hypre_boomeramg_strong_threshold) for Hypre BoomerAMG. My curiosity comes from the fact that the suggested parameters (apparently acting on the same
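[Editor's note: as the reply elsewhere in this digest explains, the two thresholds belong to completely different coarsening algorithms, so their values are not comparable. A hedged sketch of how they are typically passed on the command line (the numeric values are illustrative defaults, not recommendations):

```shell
# GAMG: graph edges with strength below the threshold are dropped during coarsening
-pc_type gamg -pc_gamg_threshold 0.0

# Hypre BoomerAMG (classical AMG): strong-coupling threshold, a different notion entirely
-pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_strong_threshold 0.25
```
]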

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2023 at 11:32 AM Nicholas Arnold-Medabalimi < narno...@umich.edu> wrote: > Hi Matt > > This was generated using the DMPlexDistributeField which we discussed a > while back. Everything seemed to be working fine when I only had cells dofs > but I recently added face dofs, which seems

Re: [petsc-users] cuda gpu eager initialization error cudaErrorNotSupported

2023-01-06 Thread Jacob Faibussowitsch
Hmm, I suspect the problem is that the GPU is simply too old, yes, but perhaps there is a simple enough workaround available in the code as you suggest. I will investigate further on Monday. Best regards, Jacob Faibussowitsch (Jacob Fai - booss - oh - vitch) On Jan 6, 2023, at 09:55, Mark Lohry

Re: [petsc-users] Error running configure on HDF5 in PETSc-3.18.3

2023-01-06 Thread Barry Smith
Please email your latest configure.log (with no Conda stuff) to petsc-ma...@mcs.anl.gov The configuration of HDF5 (done by HDF5) is objecting to some particular aspect of your current Fortran compiler, we need to figure out the exact objection. Barry

Re: [petsc-users] Error running configure on HDF5 in PETSc-3.18.3

2023-01-06 Thread Satish Balay via petsc-users
Likely your installed gfortran is incompatible with hdf5. Executing: gfortran --version; stdout: GNU Fortran (GCC) 8.2.0. We generally use brew gfortran - and that works with hdf5 as well. balay@ypro ~ % gfortran --version GNU Fortran (Homebrew GCC 11.2.0_1) 11.2.0 Satish On Fri, 6 Jan

Re: [petsc-users] Getting global indices of vector distributed among different processes.

2023-01-06 Thread Venugopal, Vysakh (venugovh) via petsc-users
Thank you, Matthew! From: Matthew Knepley Sent: Wednesday, January 4, 2023 10:52 AM To: Venugopal, Vysakh (venugovh) Cc: petsc-users@mcs.anl.gov Subject: Re: [petsc-users] Getting global indices of vector distributed among different processes. External Email: Use Caution On Wed, Jan 4,

[petsc-users] Getting correct local size using VecScatterCreateToAll

2023-01-06 Thread Venugopal, Vysakh (venugovh) via petsc-users
Hello, I have created a global vector V using DMCreateGlobalVector of size m. For n processes, the local size of V is m/n. Subsequently, I am using VecScatterCreateToAll to get a sequential copy of V, let's call it V_seq, of local size m. It passes through a function and outputs the vector
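[Editor's note: the local size is m/n only when n divides m; PETSc's default layout gives the first m mod n ranks one extra entry. A sketch of that split (Python for illustration; this mirrors the documented behavior of PetscSplitOwnership under the default layout, and the function name is hypothetical):

```python
def default_local_size(m, n, rank):
    """Default PETSc ownership split of m entries over n ranks:
    the first (m % n) ranks each get one extra entry."""
    base, extra = divmod(m, n)
    return base + (1 if rank < extra else 0)

# 10 entries over 4 ranks: local sizes 3, 3, 2, 2 (they always sum to m).
print([default_local_size(10, 4, r) for r in range(4)])  # → [3, 3, 2, 2]
```
]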

Re: [petsc-users] How to install in /usr/lib64 instead of /usr/lib?

2023-01-06 Thread Jed Brown
The make convention would be to respond to `libdir`, which is probably the simplest if we can defer that choice until install time. It probably needs to be known at build time, thus should go in configure. https://www.gnu.org/software/make/manual/html_node/Directory-Variables.html Satish Balay

Re: [petsc-users] MatCreateSeqAIJWithArrays for GPU / cusparse

2023-01-06 Thread Mark Lohry
Well, I think it's a moderately crazy idea unless it's less painful to implement than I'm thinking. Is there a use case for a mixed device system where one petsc executable might be addressing both a HIP and CUDA device beyond some frankenstein test system somebody cooked up? In all my code I

Re: [petsc-users] MatCreateSeqAIJWithArrays for GPU / cusparse

2023-01-06 Thread Junchao Zhang
On Fri, Jan 6, 2023 at 7:35 PM Mark Lohry wrote: > Well, I think it's a moderately crazy idea unless it's less painful to > implement than I'm thinking. Is there a use case for a mixed device system > where one petsc executable might be addressing both a HIP and CUDA device > beyond some

Re: [petsc-users] Vec Ownership ranges with Global Section Offsets

2023-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2023 at 12:18 PM Nicholas Arnold-Medabalimi < narno...@umich.edu> wrote: > I am doing that as well (although not for vertex dofs). And I had it > working quite well for purely cell-associated DOFs. But I realized later > that I also wanted to transmit some DOFs associated with

Re: [petsc-users] Error running configure on HDF5 in PETSc-3.18.3

2023-01-06 Thread Danyang Su
Hi Pierre, I have tried to exclude Conda related environment variables but it does not work. Instead, if I include ‘--download-hdf5=yes’ but exclude ‘--with-hdf5-fortran-bindings’ in the configuration, PETSc can be configured and installed without problem, even with Conda related

Re: [petsc-users] Error running configure on HDF5 in PETSc-3.18.3

2023-01-06 Thread Danyang Su
Hi All, the problem is resolved by Homebrew gfortran. I used GNU Fortran (GCC) 8.2.0 before. Conda does not cause the problem. Thanks, Danyang On 2023-01-06, 2:24 PM, "Satish Balay" <ba...@mcs.anl.gov> wrote: Likely your installed gfortran is incompatible with hdf5. Executing:
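[Editor's note: for anyone hitting the same HDF5 configure failure on macOS, the resolution above amounts to replacing the old gfortran with a Homebrew one and reconfiguring; a hedged sketch of the steps (the configure options beyond the compilers are taken from earlier in the thread and may differ for your build):

```shell
# Install a recent GNU toolchain via Homebrew (gfortran from GCC 8.2.0 was the culprit)
brew install gcc
gfortran --version   # should now report a Homebrew GCC

# Reconfigure PETSc with the Homebrew gfortran on PATH
./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran \
    --download-hdf5=yes --with-hdf5-fortran-bindings
```
]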

Re: [petsc-users] Getting correct local size using VecScatterCreateToAll

2023-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2023 at 6:22 PM Venugopal, Vysakh (venugovh) via petsc-users wrote: > Hello, > > > > I have created a global vector V using DMCreateGlobalVector of size m. For > n processes, the local size of V is m/n. > > > > Subsequently, I am using VecScatterCreateToAll to get a sequential