Re: [petsc-users] Parallel processes run significantly slower

2024-01-12 Thread Junchao Zhang
Hi, Steffen. It is probably because your laptop CPU is "weak". I have a local machine with one Intel Core i7 processor, which has 8 cores (16 hardware threads). I got a similar STREAM speedup. It just means 1~2 MPI ranks can use up all the memory bandwidth. That is why with your (weak scaling)
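The preview is cut off, but the bandwidth argument is easy to test independently of PETSc. Below is a minimal MPI Fortran sketch of the STREAM "triad" loop (not code from the thread; the array length and repetition count are arbitrary). Running it with mpiexec -n 1, 2, 4, ... and comparing the reported rates shows whether the aggregate memory bandwidth stops growing after one or two ranks, which is what the observation above suggests. PETSc's own STREAM test (make streams in the PETSc source tree) reports the same kind of scaling.

program stream_triad_sketch
  ! Minimal STREAM-triad sketch (not from the thread); measures aggregate
  ! memory bandwidth as a function of the number of MPI ranks.
  use mpi
  implicit none
  integer, parameter :: n = 5000000, nrep = 10
  double precision, allocatable :: a(:), b(:), c(:)
  double precision :: s, t, tmax, rate
  integer :: ierr, rank, nproc, i, rep

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)

  allocate(a(n), b(n), c(n))
  a = 0.0d0; b = 1.0d0; c = 2.0d0; s = 3.0d0

  call MPI_Barrier(MPI_COMM_WORLD, ierr)
  t = MPI_Wtime()
  do rep = 1, nrep
     do i = 1, n
        a(i) = b(i) + s*c(i)        ! the triad: memory-bandwidth bound, no data reuse
     end do
  end do
  t = MPI_Wtime() - t
  call MPI_Reduce(t, tmax, 1, MPI_DOUBLE_PRECISION, MPI_MAX, 0, MPI_COMM_WORLD, ierr)

  if (rank == 0) then
     ! 3 arrays * 8 bytes per element, nrep sweeps, summed over all ranks
     rate = dble(nrep) * dble(nproc) * 24.0d0 * dble(n) / tmax / 1.0d9
     print '(a,i4,a,f8.2,a)', 'ranks:', nproc, '   triad rate:', rate, ' GB/s'
  end if
  if (a(1) < 0.0d0) print *, a(n)    ! keeps the compiler from dropping the loop

  deallocate(a, b, c)
  call MPI_Finalize(ierr)
end program stream_triad_sketch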

Re: [petsc-users] Parallel processes run significantly slower

2024-01-12 Thread Steffen Wilksen | Universitaet Bremen
Hi Junchao, I tried it out, but unfortunately this does not seem to give any improvements; the code is still much slower when starting more processes.

Re: [petsc-users] Help with Integrating PETSc into Fortran Groundwater Flow Simulation Code

2024-01-12 Thread Junchao Zhang
Hi, Sawsan. First, in test_main.F90 you need to call VecGetArrayF90(temp_solution, H_vector, ierr) and VecRestoreArrayF90(temp_solution, H_vector, ierr), as Barry mentioned. Secondly, the loop in test_main.F90 calls GW_solver(), which in turn calls PetscInitialize()/PetscFinalize().
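The preview stops before the conclusion, but the usual arrangement is to call PetscInitialize()/PetscFinalize() exactly once per run, outside the loop, rather than inside GW_solver() on every pass. A minimal sketch of that structure follows, with a placeholder loop bound and the GW_solver call left as a comment; none of this is Sawsan's actual code.

program test_main_sketch
#include <petsc/finclude/petsc.h>
  use petsc
  implicit none
  PetscErrorCode ierr
  PetscInt       step, nsteps

  call PetscInitialize(PETSC_NULL_CHARACTER, ierr)  ! once, before the time-step loop

  nsteps = 10
  do step = 1, nsteps
     ! call GW_solver(...)   ! the solver itself no longer calls
     !                       ! PetscInitialize()/PetscFinalize() on every pass
  end do

  call PetscFinalize(ierr)                          ! once, at the very end
end program test_main_sketch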

Re: [petsc-users] Help with Integrating PETSc into Fortran Groundwater Flow Simulation Code

2024-01-12 Thread Barry Smith
PETSc vectors contain inside themselves an array with the numerical values. VecGetArrayF90() exposes this array to Fortran so you may access the values in that array. So VecGetArrayF90() does not create a new array; it gives you temporary access to an already existing array inside the vector.
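A minimal sketch of that point, with made-up names and sizes rather than code from the thread: the Fortran pointer returned by VecGetArrayF90() aliases the Vec's own storage, so writing through it changes the Vec directly and there is no copy-back step.

program getarray_sketch
#include <petsc/finclude/petscvec.h>
  use petscvec
  implicit none
  PetscErrorCode       ierr
  Vec                  x
  PetscInt             n
  PetscScalar          one
  PetscScalar, pointer :: xv(:)

  call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
  n = 4
  one = 1.0
  call VecCreateSeq(PETSC_COMM_SELF, n, x, ierr)
  call VecSet(x, one, ierr)

  call VecGetArrayF90(x, xv, ierr)      ! xv now points at the Vec's own storage
  xv(1) = 42.0                          ! writes directly into the Vec
  call VecRestoreArrayF90(x, xv, ierr)  ! hand the storage back; xv is invalid afterwards

  call VecView(x, PETSC_VIEWER_STDOUT_SELF, ierr)  ! first entry prints as 42
  call VecDestroy(x, ierr)
  call PetscFinalize(ierr)
end program getarray_sketch

Run on one process, the VecView output shows the first entry as 42, confirming that the assignment went straight into the vector.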

Re: [petsc-users] Parallel processes run significantly slower

2024-01-12 Thread Junchao Zhang
Hi, Steffen. Would it be an MPI process binding issue? Could you try running with mpiexec --bind-to core -n N python parallel_example.py? --Junchao Zhang

Re: [petsc-users] [Gmsh] Access both default sets and region names

2024-01-12 Thread Matthew Knepley
On Fri, Jan 12, 2024 at 5:26 AM Noam T. wrote: > Great. Thank you very much for the quick replies. It has now merged to the main branch. Thanks, Matt

Re: [petsc-users] Parallel processes run significantly slower

2024-01-12 Thread Steffen Wilksen | Universitaet Bremen
Thank you for your feedback. @Stefano: the use of my communicator was intentional, since I later intend to distribute M independent calculations to N processes, each process then only needing to do M/N calculations. Of course I don't expect speedup in my example since the number of
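Steffen's example is Python (parallel_example.py), but the pattern described, M independent calculations spread over N ranks with each calculation living on a per-process communicator, looks roughly like the Fortran sketch below; every name and size here is invented for illustration. Since each object lives on PETSC_COMM_SELF, the ranks never communicate inside the task loop, so the expected benefit is throughput (more tasks per unit of time), not a faster individual task.

program task_split_sketch
#include <petsc/finclude/petsc.h>
  use petsc
  implicit none
  PetscErrorCode ierr
  PetscMPIInt    rank, nproc
  PetscInt       m_total, itask, n
  Vec            x

  call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
  call MPI_Comm_rank(PETSC_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(PETSC_COMM_WORLD, nproc, ierr)

  m_total = 12                          ! M independent calculations (placeholder)
  n = 100                               ! per-task problem size (placeholder)

  do itask = rank + 1, m_total, nproc   ! rank r handles tasks r+1, r+1+N, ... (about M/N each)
     call VecCreateSeq(PETSC_COMM_SELF, n, x, ierr)  ! task-local object, no communication
     ! ... assemble and solve task number itask here ...
     call VecDestroy(x, ierr)
  end do

  call PetscFinalize(ierr)
end program task_split_sketch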

Re: [petsc-users] [Gmsh] Access both default sets and region names

2024-01-12 Thread Noam T. via petsc-users
Great. Thank you very much for the quick replies. Noam. On Thursday, January 11th, 2024 at 9:34 PM, Matthew Knepley wrote: > On Thu, Jan 11, 2024 at 12:53 PM Noam T. wrote: >> There could be some overlapping/redundancy between the default and the user-defined groups, so perhaps that was