Re: [petsc-users] Performance problem using COO interface

2023-01-17 Thread Zhang, Junchao via petsc-users
Hi, Philip, Could you add -log_view and see what functions are used in the solve? Since it is CPU-only, perhaps with -log_view of different runs, we can easily see which functions slowed down. --Junchao Zhang From: Fackler, Philip Sent: Tuesday, January 17,

Re: [petsc-users] Poor speed up for KSP example 45

2020-03-25 Thread Zhang, Junchao via petsc-users
MPI rank distribution (e.g., 8 or 16 ranks per node) is usually managed by workload managers such as Slurm or PBS through your job scripts, which is outside PETSc's control. From: Amin Sadeghi Date: Wednesday, March 25, 2020 at 4:40 PM To: Junchao Zhang Cc: Mark Adams, PETSc users

Re: [petsc-users] Choosing VecScatter Method in Matrix-Vector Product

2020-01-27 Thread Zhang, Junchao via petsc-users
--Junchao Zhang On Mon, Jan 27, 2020 at 10:09 AM Felix Huber wrote: Thank you all for your reply! > Are you using a KSP/PC configuration which should weak scale? Yes, the system is solved with KSPSolve. There is no preconditioner yet, but I fixed the

Re: [petsc-users] DMDA Error

2020-01-24 Thread Zhang, Junchao via petsc-users
Hello, Anthony, I tried petsc-3.8.4 + icc/gcc + Intel MPI 2019 update 5 + optimized/debug builds, and ran with 1024 ranks, but I could not reproduce the error. Maybe you can try these: * Use the latest PETSc + your test example, run with AND without -vecscatter_type mpi1, to see if they can

Re: [petsc-users] DMDA Error

2020-01-21 Thread Zhang, Junchao via petsc-users
I submitted a job and I am waiting for the result. --Junchao Zhang On Tue, Jan 21, 2020 at 3:03 AM Dave May wrote: Hi Anthony, On Tue, 21 Jan 2020 at 08:25, Anthony Jourdon wrote: Hello, I made a test to try to reproduce

Re: [petsc-users] DMDA Error

2020-01-16 Thread Zhang, Junchao via petsc-users
It seems the problem is triggered by DMSetUp. You can write a small test creating the DMDA with the same size as your code, to see if you can reproduce the problem. If yes, it would be much easier for us to debug it. --Junchao Zhang On Thu, Jan 16, 2020 at 7:38 AM Anthony Jourdon

Re: [petsc-users] error related to nested vector

2020-01-14 Thread Zhang, Junchao via petsc-users
Do you have a test example? --Junchao Zhang On Tue, Jan 14, 2020 at 4:44 AM Y. Shidi wrote: Dear developers, I have a 2x2 nested matrix and the corresponding nested vector. When I run the code with field splitting, it gives the following errors: [0]PETSC ERROR:

Re: [petsc-users] VecDuplicate for FFTW-Vec causes VecDestroy to fail conditionally on VecLoad

2019-11-05 Thread Zhang, Junchao via petsc-users
Fixed in https://gitlab.com/petsc/petsc/merge_requests/2262 --Junchao Zhang On Fri, Nov 1, 2019 at 6:51 PM Sajid Ali wrote: Hi Junchao/Barry, It doesn't really matter what the h5 file contains, so I'm attaching a lightly edited script of

Re: [petsc-users] VecDuplicate for FFTW-Vec causes VecDestroy to fail conditionally on VecLoad

2019-11-01 Thread Zhang, Junchao via petsc-users
I know nothing about Vec FFTW, but if you can provide the HDF5 files in your test, I will see if I can reproduce it. --Junchao Zhang On Fri, Nov 1, 2019 at 2:08 PM Sajid Ali via petsc-users wrote: Hi PETSc-developers, I'm unable to debug a crash with VecDestroy

Re: [petsc-users] Errors with ParMETIS

2019-10-18 Thread Zhang, Junchao via petsc-users
This is usually due to uninitialized variables. You can try valgrind; see the tutorial starting on page 3 of https://www.mcs.anl.gov/petsc/petsc-20/tutorial/PETSc1.pdf --Junchao Zhang On Fri, Oct 18, 2019 at 6:23 AM Shidi Yan via petsc-users wrote: Dear developers, I am using

Re: [petsc-users] CUDA-Aware MPI & PETSc

2019-10-07 Thread Zhang, Junchao via petsc-users
Hello, David, It took longer than I expected to add the CUDA-aware MPI feature to PETSc. It is now in PETSc 3.12, released last week. I have a small fix on top of that, so you had better use PETSc master. Use the PETSc option -use_gpu_aware_mpi to enable it. On Summit, you also need jsrun

Re: [petsc-users] [petsc-maint] petsc ksp solver hangs

2019-09-28 Thread Zhang, Junchao via petsc-users
Does it hang with 2 or 4 processes? Which PETSc version do you use (using the latest is easier for us to debug)? Did you configure PETSc with --with-debugging=yes COPTFLAGS="-O0 -g" CXXOPTFLAGS="-O0 -g"? After attaching gdb to one process, you can use bt to see its stack trace. --Junchao

Re: [petsc-users] Clarification of INSERT_VALUES for vec with ghost nodes

2019-09-26 Thread Zhang, Junchao via petsc-users
With VecGhostUpdateBegin(v, INSERT_VALUES, SCATTER_REVERSE), the owner will get updated by the ghost values. So in your case 1, proc 0 gets either value1 or value2 from proc 1/2; in case 2, proc 0 gets either value0 or value2 from proc 1/2. In short, you cannot achieve your goal with INSERT_VALUES.
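
For reference, a minimal sketch of the nondeterminism described above, under the assumption of exactly 2 MPI ranks where each rank owns 2 entries and ghosts one entry owned by the other rank (the sizes and values are placeholders, not from the thread):

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            v, vl;
  PetscScalar    *a;
  PetscMPIInt    rank;
  PetscInt       ghosts[1];
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ghosts[0] = rank ? 0 : 2; /* ghost one global index owned by the other rank */
  ierr = VecCreateGhost(PETSC_COMM_WORLD, 2, PETSC_DETERMINE, 1, ghosts, &v);CHKERRQ(ierr);
  ierr = VecSet(v, 1.0);CHKERRQ(ierr);

  /* Write into the ghost slot (local index 2 = first slot after the 2 owned entries) */
  ierr = VecGhostGetLocalForm(v, &vl);CHKERRQ(ierr);
  ierr = VecGetArray(vl, &a);CHKERRQ(ierr);
  a[2] = 5.0;
  ierr = VecRestoreArray(vl, &a);CHKERRQ(ierr);
  ierr = VecGhostRestoreLocalForm(v, &vl);CHKERRQ(ierr);

  /* Reverse update: owners receive the ghost contributions.  ADD_VALUES accumulates
     them deterministically; INSERT_VALUES would keep whichever copy arrives last. */
  ierr = VecGhostUpdateBegin(v, ADD_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
  ierr = VecGhostUpdateEnd(v, ADD_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);

  ierr = VecView(v, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  ierr = VecDestroy(&v);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}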

Re: [petsc-users] Clarification of INSERT_VALUES for vec with ghost nodes

2019-09-25 Thread Zhang, Junchao via petsc-users
On Wed, Sep 25, 2019 at 9:11 AM Aulisa, Eugenio via petsc-users wrote: Hi, I have a vector with ghost nodes where each process may or may not change the value of a specific ghost node (using INSERT_VALUES). At the end I would like, for each process that sees a

Re: [petsc-users] VecAssembly gets stuck

2019-09-13 Thread Zhang, Junchao via petsc-users
When processes get stuck, you can attach gdb to one process and backtrace its call stack to see what it is doing, so we can have a better understanding. --Junchao Zhang On Fri, Sep 13, 2019 at 11:31 AM José Lorenzo via petsc-users wrote: Hello, I am solving a

Re: [petsc-users] CUDA-Aware MPI & PETSc

2019-08-22 Thread Zhang, Junchao via petsc-users
I definitely will. Thanks. --Junchao Zhang On Thu, Aug 22, 2019 at 11:34 AM David Gutzwiller wrote: Hello Junchao, Spectacular news! I have our production code running on Summit (Power9 + Nvidia V100) and on local x86 workstations, and I can definitely

Re: [petsc-users] CUDA-Aware MPI & PETSc

2019-08-22 Thread Zhang, Junchao via petsc-users
This feature is under active development. I hope I can make it usable in a couple of weeks. Thanks. --Junchao Zhang On Wed, Aug 21, 2019 at 3:21 PM David Gutzwiller via petsc-users wrote: Hello, I'm currently using PETSc for the GPU acceleration of simple

Re: [petsc-users] Different behavior of code on different machines

2019-07-20 Thread Zhang, Junchao via petsc-users
Did you use the same number of MPI ranks and the same build options on your PC and on the cluster? If not, you can try to align the options on your PC with those on your cluster to see if you can reproduce the error on your PC. You can also try valgrind to see if there are memory errors like use of

Re: [petsc-users] Different behavior of code on different machines

2019-07-20 Thread Zhang, Junchao via petsc-users
You need to test on your personal computer with multiple MPI processes (e.g., mpirun -n 2 ...) before moving to big machines. You may also need to configure PETSc with --with-debugging=1 --COPTFLAGS="-O0 -g" etc. to ease debugging. --Junchao Zhang On Sat, Jul 20, 2019 at 11:03 AM Yuyun Yang via

Re: [petsc-users] VecGhostRestoreLocalForm

2019-07-20 Thread Zhang, Junchao via petsc-users
On Sat, Jul 20, 2019 at 5:47 AM José Lorenzo via petsc-users wrote: Hello, I am not sure I understand the function VecGhostRestoreLocalForm. If I proceed as stated in the manual,

Re: [petsc-users] Communication during MatAssemblyEnd

2019-07-01 Thread Zhang, Junchao via petsc-users
I know the time only from my function. Anyway, in every case the times between std::chrono and the PETSc log match. (The large times are in part "4b- Building offdiagonal part" or "Event Stage 5: Offdiag".) On Fri.

Re: [petsc-users] Communication during MatAssemblyEnd

2019-06-28 Thread Zhang, Junchao via petsc-users
22:20, Smith, Barry F. wrote: > Note that this is a one-time cost if the nonzero structure of the matrix stays the same. It will not happen in future MatAssemblies. > On Jun 20, 2019, at 3:16 PM, Zhang, Junchao via petsc-users

Re: [petsc-users] DMPlexDistributeField

2019-06-27 Thread Zhang, Junchao via petsc-users
On Thu, Jun 27, 2019 at 4:50 PM Adrian Croucher wrote: hi On 28/06/19 3:14 AM, Zhang, Junchao wrote: > You can dump relevant SFs to make sure their graph is correct. Yes, I'm doing that, and the graphs don't look correct. Check how the graph is created and

Re: [petsc-users] DMPlexDistributeField

2019-06-27 Thread Zhang, Junchao via petsc-users
On Wed, Jun 26, 2019 at 11:12 PM Adrian Croucher wrote: hi On 27/06/19 4:07 PM, Zhang, Junchao wrote: Adrian, I am working on SF but know nothing about DMPlexDistributeField. Do you think SF creation or communication is wrong? If yes, I'd like to know the

Re: [petsc-users] DMPlexDistributeField

2019-06-26 Thread Zhang, Junchao via petsc-users
On Mon, Jun 24, 2019 at 6:23 PM Adrian Croucher via petsc-users wrote: hi Thanks Matt for the explanation about this. I have been trying a test which does the following: 1) read in DMPlex from file 2) distribute it, with overlap = 1, using

Re: [petsc-users] Communication during MatAssemblyEnd

2019-06-26 Thread Zhang, Junchao via petsc-users
wrote: > Note that this is a one-time cost if the nonzero structure of the matrix stays the same. It will not happen in future MatAssemblies. > On Jun 20, 2019, at 3:16 PM, Zhang, Junchao via petsc-users

Re: [petsc-users] Communication during MatAssemblyEnd

2019-06-25 Thread Zhang, Junchao via petsc-users
Thanks both of you for your answers. On Thu, Jun 20, 2019 at 22:20, Smith, Barry F. wrote: > Note that this is a one-time cost if the nonzero structure of the matrix stays the same. It will not happen in future MatAssemblies.

Re: [petsc-users] Communication during MatAssemblyEnd

2019-06-21 Thread Zhang, Junchao via petsc-users
wrote: > Note that this is a one-time cost if the nonzero structure of the matrix stays the same. It will not happen in future MatAssemblies. > On Jun 20, 2019, at 3:16 PM, Zhang, Junchao via petsc-users

Re: [petsc-users] Communication during MatAssemblyEnd

2019-06-21 Thread Zhang, Junchao via petsc-users
the matrix stays the same. It will not happen in future MatAssemblies. > On Jun 20, 2019, at 3:16 PM, Zhang, Junchao via petsc-users wrote: > Those messages were used to build the MatMult communication pattern for the matrix. They were not part

Re: [petsc-users] Communication during MatAssemblyEnd

2019-06-20 Thread Zhang, Junchao via petsc-users
Those messages were used to build the MatMult communication pattern for the matrix. They were not part of the matrix entry passing you imagined, but they indeed happen in MatAssemblyEnd. If you want to make sure processors do not set remote entries, you can use
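
The suggestion is cut off above; it presumably refers to telling PETSc that no rank will set entries owned by another rank, e.g. via MatSetOption with MAT_NO_OFF_PROC_ENTRIES (my assumption, not confirmed by the truncated message). A minimal sketch, with A an already created parallel AIJ matrix:

/* Assumption: every rank only calls MatSetValues() on rows it owns, so
   MatAssembly needs no entry communication. */
PetscErrorCode ierr;
ierr = MatSetOption(A, MAT_NO_OFF_PROC_ENTRIES, PETSC_TRUE);CHKERRQ(ierr);
/* ... MatSetValues() on locally owned rows only ... */
ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);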

Re: [petsc-users] Memory growth issue

2019-06-05 Thread Zhang, Junchao via petsc-users
Sanjay, You have one more reason to use VecScatter, which is heavily used and well-tested. --Junchao Zhang On Wed, Jun 5, 2019 at 5:47 PM Sanjay Govindjee wrote: I found the bug (naturally in my own code). When I made the MPI_Wait( ) changes, I missed one

Re: [petsc-users] Memory growth issue

2019-06-05 Thread Zhang, Junchao via petsc-users
OK, I see. I mistakenly read PetscMemoryGetCurrentUsage as PetscMallocGetCurrentUsage. You should also do PetscMallocGetCurrentUsage(), so that we know whether the increased memory is allocated by PETSc. On Wed, Jun 5, 2019, 9:58 AM Sanjay GOVINDJEE wrote:

Re: [petsc-users] Memory growth issue

2019-06-03 Thread Zhang, Junchao via petsc-users
On Mon, Jun 3, 2019 at 5:23 PM Stefano Zampini wrote: On Jun 4, 2019, at 1:17 AM, Zhang, Junchao via petsc-users wrote: Sanjay & Barry, Sorry, I made a mistake when I said I could reproduce Sanjay's exper

Re: [petsc-users] Memory growth issue

2019-06-03 Thread Zhang, Junchao via petsc-users
Sanjay & Barry, Sorry, I made a mistake when I said I could reproduce Sanjay's experiments. I found that 1) to correctly use PetscMallocGetCurrentUsage() when PETSc is configured without debugging, I have to run the program with -malloc; 2) I have to instrument the code outside of KSPSolve().

Re: [petsc-users] Memory growth issue

2019-06-01 Thread Zhang, Junchao via petsc-users
On Sat, Jun 1, 2019 at 3:21 AM Sanjay Govindjee wrote: Barry, If you look at the graphs I generated (on my Mac), you will see that OpenMPI and MPICH have very different values (along with the fact that MPICH does not seem to adhere to the standard (for releasing

Re: [petsc-users] Memory growth issue

2019-05-31 Thread Zhang, Junchao via petsc-users
On Fri, May 31, 2019 at 3:48 PM Sanjay Govindjee via petsc-users wrote: Thanks Stefano. Reading the manual pages a bit more carefully, I think I can see what I should be doing, which should be roughly: 1. Set up target Seq vectors on PETSC_COMM_SELF 2. Use

Re: [petsc-users] Memory growth issue

2019-05-31 Thread Zhang, Junchao via petsc-users
Sanjay, I tried PETSc with MPICH and OpenMPI on my MacBook. I inserted PetscMemoryGetCurrentUsage/PetscMallocGetCurrentUsage at the beginning and end of KSPSolve, then computed the delta and summed it over processes. Then I tested with
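
A sketch of that instrumentation, assuming ksp, b, and x already exist (PetscLogDouble is a plain double, so MPI_DOUBLE is used in the reduction):

PetscErrorCode ierr;
PetscLogDouble mem0, mem1, mal0, mal1, dmem, dmal, dmem_sum, dmal_sum;

ierr = PetscMemoryGetCurrentUsage(&mem0);CHKERRQ(ierr);   /* resident set size */
ierr = PetscMallocGetCurrentUsage(&mal0);CHKERRQ(ierr);   /* memory PETSc has malloc'ed */
ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
ierr = PetscMemoryGetCurrentUsage(&mem1);CHKERRQ(ierr);
ierr = PetscMallocGetCurrentUsage(&mal1);CHKERRQ(ierr);
dmem = mem1 - mem0; dmal = mal1 - mal0;
/* Sum the per-process deltas on rank 0 */
ierr = MPI_Reduce(&dmem, &dmem_sum, 1, MPI_DOUBLE, MPI_SUM, 0, PETSC_COMM_WORLD);CHKERRQ(ierr);
ierr = MPI_Reduce(&dmal, &dmal_sum, 1, MPI_DOUBLE, MPI_SUM, 0, PETSC_COMM_WORLD);CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_WORLD, "delta RSS %g, delta PETSc malloc %g (bytes)\n", dmem_sum, dmal_sum);CHKERRQ(ierr);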

Re: [petsc-users] Memory growth issue

2019-05-30 Thread Zhang, Junchao via petsc-users
Hi, Sanjay, Could you send your modified data exchange code (psetb.F) with MPI_Waitall? See other inlined comments below. Thanks. On Thu, May 30, 2019 at 1:49 PM Sanjay Govindjee via petsc-users wrote: Lawrence, Thanks for taking a look! This is what I had

Re: [petsc-users] Nonzero I-j locations

2019-05-29 Thread Zhang, Junchao via petsc-users
Yes, see MatGetRow: https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetRow.html --Junchao Zhang On Wed, May 29, 2019 at 2:28 PM Manav Bhatia via petsc-users wrote: Hi, Once an MPI-AIJ matrix has been assembled, is there a method to get the
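
A minimal sketch of the MatGetRow pattern, assuming A is an assembled MPIAIJ matrix (each rank can only query rows it owns):

PetscErrorCode    ierr;
PetscInt          rstart, rend, row, ncols;
const PetscInt    *cols;
const PetscScalar *vals;

ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
for (row = rstart; row < rend; row++) {
  ierr = MatGetRow(A, row, &ncols, &cols, &vals);CHKERRQ(ierr);
  /* ... inspect cols[0..ncols-1] and vals[0..ncols-1] ... */
  ierr = MatRestoreRow(A, row, &ncols, &cols, &vals);CHKERRQ(ierr);
}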

Re: [petsc-users] How do I supply the compiler PIC flag via CFLAGS, CXXXFLAGS, and FCFLAGS

2019-05-28 Thread Zhang, Junchao via petsc-users
This also works with the PathScale EKOPath Compiler Suite installed on MCS machines. $ pathcc -c check-pic.c -fPIC $ pathcc -c check-pic.c check-pic.c:2:2: error: "no-PIC" #error "no-PIC" ^ 1 error generated. --Junchao Zhang On Tue, May 28, 2019 at 1:54 PM Smith, Barry F. via petsc-users
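
The contents of check-pic.c are not shown in the preview; a plausible reconstruction (my assumption, not the file from the thread) is a source file that fails to compile unless the compiler defines a PIC macro, which is what makes the -fPIC test above work:

/* check-pic.c (assumed contents): compiles only when PIC is enabled */
#if !defined(__PIC__) && !defined(__pic__) && !defined(PIC)
#error "no-PIC"
#endif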

Re: [petsc-users] Question about parallel Vectors and communicators

2019-05-13 Thread Zhang, Junchao via petsc-users
The index sets provide the possible i, j in the scatter "y[j] = x[i]". Each process provides a portion of the i and j of the whole scatter. The only requirement of VecScatterCreate is that on each process, the local sizes of ix and iy must be equal (a process can provide empty ix and iy). A process's i
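
A minimal sketch of that "y[j] = x[i]" setup, assuming x is an existing parallel vector of global size at least 2; every process gathers global entries 0 and 1 of x into its own sequential vector y:

PetscErrorCode ierr;
Vec            y;
IS             ix, iy;
VecScatter     sct;
PetscInt       idx[2] = {0, 1};

ierr = VecCreateSeq(PETSC_COMM_SELF, 2, &y);CHKERRQ(ierr);
ierr = ISCreateGeneral(PETSC_COMM_SELF, 2, idx, PETSC_COPY_VALUES, &ix);CHKERRQ(ierr); /* i: global indices of x */
ierr = ISCreateGeneral(PETSC_COMM_SELF, 2, idx, PETSC_COPY_VALUES, &iy);CHKERRQ(ierr); /* j: local indices of y  */
ierr = VecScatterCreate(x, ix, y, iy, &sct);CHKERRQ(ierr); /* ix and iy have equal local sizes, as required */
ierr = VecScatterBegin(sct, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
ierr = VecScatterEnd(sct, x, y, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
ierr = VecScatterDestroy(&sct);CHKERRQ(ierr);
ierr = ISDestroy(&ix);CHKERRQ(ierr);
ierr = ISDestroy(&iy);CHKERRQ(ierr);
ierr = VecDestroy(&y);CHKERRQ(ierr);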

Re: [petsc-users] Question about parallel Vectors and communicators

2019-05-10 Thread Zhang, Junchao via petsc-users
Jean-Christophe, I added a PETSc example at https://bitbucket.org/petsc/petsc/pull-requests/1652/add-an-example-to-show-transfer-vectors/diff#chg-src/vec/vscat/examples/ex9.c It shows how to transfer vectors from a parent communicator to vectors on a child communicator. It also shows how to

Re: [petsc-users] Command line option -memory_info

2019-05-07 Thread Zhang, Junchao via petsc-users
https://www.mcs.anl.gov/petsc/documentation/changes/37.html notes that PetscMemoryShowUsage() and -memory_info were changed to PetscMemoryView() and -memory_view. --Junchao Zhang On Tue, May 7, 2019 at 6:56 PM Sanjay Govindjee via petsc-users wrote: I was trying to clean

Re: [petsc-users] Quick question about ISCreateGeneral

2019-04-30 Thread Zhang, Junchao via petsc-users
On Tue, Apr 30, 2019 at 11:42 AM Sajid Ali via petsc-users wrote: Hi PETSc Developers, I see that in the examples for ISCreateGeneral, the index sets are created by copying values from int arrays (which were created by PetscMalloc1, which is not collective).
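
A small sketch of the pattern the question refers to, with placeholder sizes and indices; the PetscCopyMode argument controls who owns the array afterwards:

PetscErrorCode ierr;
IS             is;
PetscInt       n = 4, i, *idx;

ierr = PetscMalloc1(n, &idx);CHKERRQ(ierr);          /* not collective */
for (i = 0; i < n; i++) idx[i] = 2*i;                /* e.g. every other entry */
ierr = ISCreateGeneral(PETSC_COMM_SELF, n, idx, PETSC_OWN_POINTER, &is);CHKERRQ(ierr);
/* With PETSC_OWN_POINTER the IS now owns idx, so do not free it here;
   with PETSC_COPY_VALUES we would call PetscFree(idx) ourselves instead. */
ierr = ISDestroy(&is);CHKERRQ(ierr);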

Re: [petsc-users] questions regarding simple petsc matrix vector operation

2019-04-24 Thread Zhang, Junchao via petsc-users
How many MPI ranks do you use? The following line is suspicious; I guess you do not want a vector of global length 1: line 66: VecSetSizes(b,PETSC_DECIDE,1); --Junchao Zhang On Wed, Apr 24, 2019 at 4:14 PM Karl Lin via petsc-users wrote: Hi there, I have been

Re: [petsc-users] Preallocation of sequential matrix

2019-04-23 Thread Zhang, Junchao via petsc-users
The error message has: [0]PETSC ERROR: New nonzero at (61,124) caused a malloc [0]PETSC ERROR: New nonzero at (124,186) caused a malloc You can check your code to see whether you allocated spots for these nonzeros. --Junchao Zhang On Tue, Apr 23, 2019 at 8:57 PM Maahi Talukder via petsc-users
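
A sketch of preallocating a sequential AIJ matrix so that entries such as (61,124) do not trigger a malloc during MatSetValues; the size and per-row counts are placeholders, not values from the thread:

PetscErrorCode ierr;
Mat            A;
PetscInt       n = 200, i, *nnz;

ierr = PetscMalloc1(n, &nnz);CHKERRQ(ierr);
for (i = 0; i < n; i++) nnz[i] = 9;                 /* expected nonzeros in each row */
ierr = MatCreate(PETSC_COMM_SELF, &A);CHKERRQ(ierr);
ierr = MatSetSizes(A, n, n, n, n);CHKERRQ(ierr);
ierr = MatSetType(A, MATSEQAIJ);CHKERRQ(ierr);
ierr = MatSeqAIJSetPreallocation(A, 0, nnz);CHKERRQ(ierr); /* nnz overrides the scalar argument */
ierr = PetscFree(nnz);CHKERRQ(ierr);
/* ... MatSetValues(), then MatAssemblyBegin/End ... */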

Re: [petsc-users] PetscSFReduceBegin can not handle MPI_CHAR?

2019-04-04 Thread Zhang, Junchao via petsc-users
I updated the branch and made a PR. I tried to do MPI_SUM on MPI_CHAR. We do not have UnpackAdd on this type (we are right). But unfortunately, MPICH's MPI_Reduce_local did not report an error (it should have), so we did not generate an error either. --Junchao Zhang On Thu, Apr 4, 2019 at 10:37 AM

Re: [petsc-users] PetscSFReduceBegin can not handle MPI_CHAR?

2019-04-03 Thread Zhang, Junchao via petsc-users
On Wed, Apr 3, 2019, 10:29 PM Fande Kong wrote: Thanks for the reply. It is not necessary for me to use MPI_SUM. I think the better choice is MPIU_REPLACE. Doesn't MPIU_REPLACE work for any MPI datatype? Yes. Fande On Apr 3, 2019, at 9:15 PM, Zhang, Junchao

Re: [petsc-users] PetscSFReduceBegin can not handle MPI_CHAR?

2019-04-03 Thread Zhang, Junchao via petsc-users
On Wed, Apr 3, 2019 at 3:41 AM Lisandro Dalcin via petsc-users wrote: IIRC, MPI_CHAR is for ASCII text data. Also, remember that in C the signedness of plain `char` is implementation (or platform?) dependent. I'm not sure MPI_Reduce() is supposed to / should

Re: [petsc-users] MPI Communication times

2019-03-23 Thread Zhang, Junchao via petsc-users
communication pattern to see if it is the case. --Junchao Zhang On Wed, Mar 20, 2019 at 4:44 PM Zhang, Junchao via petsc-users wrote: On Wed, Mar 20, 2019 at 4:18 PM Manuel Valera wrote: Thanks for your answer, so for example I have

Re: [petsc-users] MPI Communication times

2019-03-22 Thread Zhang, Junchao via petsc-users
Mar 20, 2019 at 4:44 PM Zhang, Junchao via petsc-users wrote: On Wed, Mar 20, 2019 at 4:18 PM Manuel Valera wrote: Thanks for your answer, so for example I have a log for 2

Re: [petsc-users] MPI Communication times

2019-03-22 Thread Zhang, Junchao via petsc-users
often happens thanks to locality, then the memory copy time is counted in VecScatter. You can analyze your code's communication pattern to see if it is the case. --Junchao Zhang On Wed, Mar 20, 2019 at 4:44 PM Zhang, Junchao via petsc-users wrote: On Wed,

Re: [petsc-users] Valgrind Issue With Ghosted Vectors

2019-03-21 Thread Zhang, Junchao via petsc-users
On Thu, Mar 21, 2019 at 1:57 PM Derek Gaston via petsc-users wrote: It sounds like you already tracked this down... but for completeness here is what track-origins gives: ==262923== Conditional jump or move depends on uninitialised value(s) ==262923== at

Re: [petsc-users] Valgrind Issue With Ghosted Vectors

2019-03-21 Thread Zhang, Junchao via petsc-users
ago: https://bitbucket.org/petsc/petsc/commits/c3caad8634d376283f7053f3b388606b45b3122c Maybe this will fix your problem too? Stefano On Thu, Mar 21, 2019, 04:21 Zhang, Junchao via petsc-users

Re: [petsc-users] Valgrind Issue With Ghosted Vectors

2019-03-21 Thread Zhang, Junchao via petsc-users
https://bitbucket.org/petsc/petsc/commits/c3caad8634d376283f7053f3b388606b45b3122c Maybe this will fix your problem too? Stefano On Thu, Mar 21, 2019, 04:21 Zhang, Junchao via petsc-users wrote: Hi, Derek, Try to apply this tiny (but dirty) patch on

Re: [petsc-users] Valgrind Issue With Ghosted Vectors

2019-03-20 Thread Zhang, Junchao via petsc-users
Hi, Derek, Try to apply this tiny (but dirty) patch on your version of PETSc to disable the VecScatterMemcpyPlan optimization to see if it helps. Thanks. --Junchao Zhang On Wed, Mar 20, 2019 at 6:33 PM Junchao Zhang wrote: Did you see the warning with small

Re: [petsc-users] Valgrind Issue With Ghosted Vectors

2019-03-20 Thread Zhang, Junchao via petsc-users
Did you see the warning with small-scale runs? Is it possible to provide a test code? You mentioned "changing PETSc now would be pretty painful". Is it because it will affect your performance (but not your code)? If yes, could you try PETSc master and run your code with or without

Re: [petsc-users] MPI Communication times

2019-03-20 Thread Zhang, Junchao via petsc-users
communication pattern to see if it is the case. --Junchao Zhang On Wed, Mar 20, 2019 at 4:44 PM Zhang, Junchao via petsc-users wrote: On Wed, Mar 20, 2019 at 4:18 PM Manuel Valera wrote: Thanks for your answer, so for example I have a

Re: [petsc-users] MPI Communication times

2019-03-20 Thread Zhang, Junchao via petsc-users
On Wed, Mar 20, 2019 at 4:18 PM Manuel Valera wrote: Thanks for your answer, so for example I have a log for 200 cores across 10 nodes that reads:

Re: [petsc-users] MPI Communication times

2019-03-20 Thread Zhang, Junchao via petsc-users
See the "Mess AvgLen Reduct" number in each log stage. Mess is the total number of messages sent in an event over all processes. AvgLen is average message len. Reduct is the number of global reduction. Each event like VecScatterBegin/End has a maximal execution time over all processes, and

Re: [petsc-users] PCFieldSplit with MatNest

2019-03-13 Thread Zhang, Junchao via petsc-users
Manuel, Could you try adding the line sbaij->free_imax_ilen = PETSC_TRUE; after line 2431 in /opt/PETSc_library/petsc-3.10.4/src/mat/impls/sbaij/seq/sbaij.c? PS: Matt, this bug looks unrelated to my VecRestoreArrayRead_Nest fix. --Junchao Zhang On Wed, Mar 13, 2019 at 9:05 AM Matthew

Re: [petsc-users] PetscScatterCreate type mismatch after update.

2019-03-12 Thread Zhang, Junchao via petsc-users
Maybe you should delete your PETSC_ARCH directory and recompile? I tested my branch; it should not fail that easily :) --Junchao Zhang On Tue, Mar 12, 2019 at 8:20 PM Manuel Valera wrote: Hi Mr Zhang, thanks for your reply, I just checked your branch out,

Re: [petsc-users] PetscScatterCreate type mismatch after update.

2019-03-12 Thread Zhang, Junchao via petsc-users
Manuel, I was working on a branch to revert the VecScatterCreate to VecScatterCreateWithData change. The change broke the PETSc API and I think we do not need it. I had planned to do a pull request after another PR of mine is merged. But since it already affects you, you can try this branch now, which is

Re: [petsc-users] PCFieldSplit with MatNest

2019-03-12 Thread Zhang, Junchao via petsc-users
Hi, Manuel, I recently fixed a problem in VecRestoreArrayRead. Basically, I added VecRestoreArrayRead_Nest. Could you try the master branch of PETSc to see if it fixes your problem? Thanks. --Junchao Zhang On Mon, Mar 11, 2019 at 6:56 AM Manuel Colera Rico via petsc-users

Re: [petsc-users] Compute the sum of the absolute values of the off-block diagonal entries of each row

2019-03-04 Thread Zhang, Junchao via petsc-users
On Mon, Mar 4, 2019 at 10:39 AM Matthew Knepley via petsc-users wrote: On Mon, Mar 4, 2019 at 11:28 AM Cyrill Vonplanta via petsc-users wrote: Dear Petsc Users, I am trying to implement a variant of the $l^1$-Gauss-Seidel

Re: [petsc-users] Problem in loading Matrix Market format

2019-02-28 Thread Zhang, Junchao via petsc-users
Eda, An update to ex72 was just merged to the PETSc master branch. It can now read symmetric or non-symmetric matrices in Matrix Market format, and output a PETSc binary matrix in MATSBAIJ format (for symmetric) or MATAIJ format (for non-symmetric). See the help in the source code for usage.

Re: [petsc-users] AddressSanitizer: attempting free on address which was not malloc()-ed

2019-02-27 Thread Zhang, Junchao via petsc-users
Try the following to see if you can catch the bug easily: 1) Get the error code of each PETSc function and check it with CHKERRQ; 2) Link your code against a PETSc library built with debugging enabled (configured with --with-debugging=1); 3) Run your code with valgrind. --Junchao Zhang On Wed, Feb 27,
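
A minimal sketch of point 1), the CHKERRQ error-checking pattern, using a throwaway vector as a stand-in for the caller's own PETSc calls:

PetscErrorCode ierr;
Vec            x;

ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);
ierr = VecSetSizes(x, PETSC_DECIDE, 100);CHKERRQ(ierr);
ierr = VecSetFromOptions(x);CHKERRQ(ierr);
ierr = VecSet(x, 1.0);CHKERRQ(ierr);
ierr = VecDestroy(&x);CHKERRQ(ierr);  /* any failing call above produces a stack trace */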

Re: [petsc-users] AddressSanitizer: attempting free on address which was not malloc()-ed

2019-02-27 Thread Zhang, Junchao via petsc-users
Could you provide a compilable and runnable test so I can try it? --Junchao Zhang On Wed, Feb 27, 2019 at 7:34 PM Yuyun Yang wrote: Thanks, I fixed that, but I'm not actually calling the testScatters() function in my implementation (in the constructor, the only

Re: [petsc-users] Direct PETSc to use MCDRAM on KNL and other optimizations for KNL

2019-02-27 Thread Zhang, Junchao via petsc-users
On Wed, Feb 27, 2019 at 7:03 PM Sajid Ali wrote: Hi Junchao, I'm confused by the syntax. If I submit the following as my job script, I get an error: #!/bin/bash #SBATCH --job-name=petsc_test #SBATCH -N 1 #SBATCH -C knl,quad,flat #SBATCH -p

Re: [petsc-users] Direct PETSc to use MCDRAM on KNL and other optimizations for KNL

2019-02-27 Thread Zhang, Junchao via petsc-users
Use srun numactl -m 1 ./app OR srun numactl -p 1 ./app. See the bottom of https://www.nersc.gov/users/computational-systems/cori/configuration/knl-processor-modes/ --Junchao Zhang On Wed, Feb 27, 2019 at 4:16 PM Sajid Ali via petsc-users wrote: Hi, I ran a TS

Re: [petsc-users] AddressSanitizer: attempting free on address which was not malloc()-ed

2019-02-27 Thread Zhang, Junchao via petsc-users
On Wed, Feb 27, 2019 at 10:41 AM Yuyun Yang via petsc-users wrote: I called VecDestroy() in the destructor for this object – is that not the right way to do it? In Domain::testScatters(), you have many VecDuplicate() calls. You need to VecDestroy() before doing a new
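
A tiny sketch of the fix being suggested, using hypothetical member Vecs _x and _y (not names from the thread): destroy the old vector before duplicating into the same handle again, and destroy it exactly once in the destructor.

PetscErrorCode ierr;
if (_y) { ierr = VecDestroy(&_y);CHKERRQ(ierr); }  /* VecDestroy() also NULLs the handle */
ierr = VecDuplicate(_x, &_y);CHKERRQ(ierr);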

Re: [petsc-users] Problem in loading Matrix Market format

2019-02-12 Thread Zhang, Junchao via petsc-users
Sure. --Junchao Zhang On Tue, Feb 12, 2019 at 9:47 AM Matthew Knepley wrote: Hi Junchao, Could you fix the MM example in PETSc to have this full support? That way we will always have it. Thanks, Matt On Tue, Feb 12, 2019 at 10:27 AM Zhang, Junchao via

Re: [petsc-users] Problem in loading Matrix Market format

2019-02-12 Thread Zhang, Junchao via petsc-users
Eda, I have a code that can read in Matrix Market and write out PETSc binary files. Usage: mpirun -n 1 ./mm2petsc -fin -fout . You can give it a try. --Junchao Zhang On Tue, Feb 12, 2019 at 1:50 AM Eda Oktay via petsc-users wrote: Hello, I am trying to

Re: [petsc-users] Slow linear solver via MUMPS

2019-01-26 Thread Zhang, Junchao via petsc-users
On Fri, Jan 25, 2019 at 8:07 PM Mohammad Gohardoust via petsc-users wrote: Hi, I am trying to modify a "pure MPI" code for solving the water movement equation in soils, which employs KSP iterative solvers. This code gets really slow on the HPC system I am testing it on, as

Re: [petsc-users] MPI Iterative solver crash on HPC

2019-01-25 Thread Zhang, Junchao via petsc-users
'-mattransposematmult_via scalable' Hong On Fri, Jan 11, 2019 at 9:52 AM Zhang, Junchao via petsc-users wrote: I saw the following error message in your first email. [0]PETSC ERROR: Out of memory. This could be due to allocating [0]PETSC ERROR: too large an obj

Re: [petsc-users] MPI Iterative solver crash on HPC

2019-01-17 Thread Zhang, Junchao via petsc-users
Jan 11, 2019 at 10:34 AM Zhang, Hong via petsc-users wrote: Add option '-mattransposematmult_via scalable' Hong On Fri, Jan 11, 2019 at 9:52 AM Zhang, Junchao via petsc-users wrote: I saw the following error message in

Re: [petsc-users] MPI Iterative solver crash on HPC

2019-01-11 Thread Zhang, Junchao via petsc-users
I saw the following error message in your first email. [0]PETSC ERROR: Out of memory. This could be due to allocating [0]PETSC ERROR: too large an object or bleeding by not properly [0]PETSC ERROR: destroying unneeded objects. Probably the matrix is too large. You can try with more compute nodes,

Re: [petsc-users] Dynamically resize the existing PetscVector

2018-12-17 Thread Zhang, Junchao via petsc-users
Or, you can keep your own array and create the PETSc vectors with VecCreateGhostWithArray, so that you manage the memory resizing yourself. --Junchao Zhang On Mon, Dec 17, 2018 at 9:15 AM Shidi Yan via petsc-users wrote: Hello, I am working on adaptive
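
A sketch of that idea with placeholder sizes and ghost indices (the array must have room for the owned plus ghost entries): the application owns the storage, the ghosted Vec is only a view of it, so after an adaptation step you destroy the Vec, resize your array, and create a new Vec over the new storage.

PetscErrorCode ierr;
PetscScalar    *array;
PetscInt       nlocal = 100, nghost = 4;
PetscInt       ghosts[4] = {0, 0, 0, 0}; /* placeholder: fill with global indices owned by other ranks */
Vec            v;

ierr = PetscMalloc1(nlocal + nghost, &array);CHKERRQ(ierr);
ierr = VecCreateGhostWithArray(PETSC_COMM_WORLD, nlocal, PETSC_DETERMINE,
                               nghost, ghosts, array, &v);CHKERRQ(ierr);
/* ... use v ...; when the mesh (and hence the size) changes: */
ierr = VecDestroy(&v);CHKERRQ(ierr);   /* does not free 'array' -- we own it */
ierr = PetscFree(array);CHKERRQ(ierr); /* or reallocate and create a new ghosted Vec */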

Re: [petsc-users] MUMPS Error

2018-12-12 Thread Zhang, Junchao via petsc-users
On Wed, Dec 12, 2018 at 7:14 AM Sal Am via petsc-users wrote: Hi, I am getting an error using MUMPS. How I run it: bash-4.2$ mpiexec -n 16 ./solveCSys -ksp_type richardson -pc_type lu -pc_factor_mat_solver_type mumps -ksp_max_it 1 -ksp_monitor_true_residual

Re: [petsc-users] Compile petsc using intel mpi

2018-11-27 Thread Zhang, Junchao via petsc-users
On Tue, Nov 27, 2018 at 5:25 AM Edoardo alinovi via petsc-users wrote: Dear users, I have installed Intel Parallel Studio on my workstation and thus would like to take advantage of the Intel compiler. Before messing up my installation, have you got some