Re: [petsc-users] MatMatMult densemat

2019-03-05 Thread Marius Buerkle via petsc-users
I see. No problem, I was just curious. A workaround for me is to put the result of MatMatMult into a temporary matrix and use MatCopy to copy it to C with the correct local rows. Marius, The reason this is happening is because the routine MatMatMultSymbolic_MPIDense_MPIDense() works by

Re: [petsc-users] MatMatMult densemat

2019-03-05 Thread Marius Buerkle via petsc-users
This gives a seg fault. What happens if you try to preallocate the C matrix (in the same way as A and B) and use MatMatMult with MAT_REUSE_MATRIX? Hong (Mr.) On Mar 5, 2019, at 6:19 PM, Marius Buerkle via petsc-users wrote: Hi, I have a question regardin

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-05 Thread Jed Brown via petsc-users
Myriam, in your first message, there was a significant (about 50%) increase in memory consumption already on 4 cores. Before attacking scaling, it may be useful to trace memory usage for that base case. Even better if you can reduce to one process. Anyway, I would start by running both cases with

Re: [petsc-users] MatMatMult densemat

2019-03-05 Thread Zhang, Hong via petsc-users
What happens if you try to preallocate the C matrix (in the same way as A and B) and use MatMatMult with MAT_REUSE_MATRIX? Hong (Mr.) On Mar 5, 2019, at 6:19 PM, Marius Buerkle via petsc-users wrote: Hi, I have a question regarding MatMatMult for MPIDENSE matri

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-05 Thread Smith, Barry F. via petsc-users
Myriam, Sorry we have not been able to resolve this problem with memory scaling yet. The best tool to determine which change in a code base causes a large difference in a program's behavior is git bisect. Basically you tell git bisect the git commit of the code that is "good" and the gi
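The suggestion above relies on the fact that `git bisect` performs a binary search over the commit history, so only about log2(n) builds need to be tested. A minimal sketch of that principle in plain Python (the commit list and the `is_bad` predicate here are made-up illustrations, not part of git itself):

```python
# Sketch of the binary-search idea behind `git bisect`: given an ordered
# commit history where some unknown commit introduced a regression, find
# the first "bad" commit with O(log n) tests.

def first_bad_commit(commits, is_bad):
    """Return the first commit for which is_bad() is True.
    Assumes commits[0] is good and commits[-1] is bad."""
    lo, hi = 0, len(commits) - 1   # lo: known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid               # regression is at mid or earlier
        else:
            lo = mid               # regression is after mid
    return commits[hi]

# Hypothetical history: commits 0..9, regression introduced at commit 7.
commits = list(range(10))
print(first_bad_commit(commits, lambda c: c >= 7))  # -> 7
```

With the real tool, the equivalent would be `git bisect start`, then `git bisect good <known-good-commit>` and `git bisect bad <known-bad-commit>`, rebuilding and testing at each commit git checks out.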

Re: [petsc-users] Loading only upper + MatSetOption(A,MAT_SYMMETRIC,PETSC_TRUE);

2019-03-05 Thread Smith, Barry F. via petsc-users
> On Mar 4, 2019, at 7:03 AM, Klaus Burkart via petsc-users > wrote: > > Hello, > > I want to solve many symmetric linear systems one after another in parallel > using boomerAMG + KSPCG and need to make the matrix transfer more efficient. > Matrices are symmetric in structure and values.
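The efficiency gain being asked about comes from the fact that a matrix symmetric in both structure and values is fully determined by its upper triangle, so only n(n+1)/2 entries need to be loaded. A concept sketch in plain Python (this illustrates the storage idea only, not PETSc's actual SBAIJ/MAT_SYMMETRIC implementation):

```python
# Concept sketch (not PETSc API): a symmetric matrix is fully determined
# by its upper triangle, so only n*(n+1)/2 of the n*n entries need to be
# transferred; the rest can be mirrored on reconstruction.

def upper_triangle(A):
    """Flatten the upper triangle (including diagonal) row by row."""
    n = len(A)
    return [A[i][j] for i in range(n) for j in range(i, n)]

def from_upper_triangle(u, n):
    """Rebuild the full symmetric n x n matrix from its upper triangle."""
    A = [[0.0] * n for _ in range(n)]
    k = 0
    for i in range(n):
        for j in range(i, n):
            A[i][j] = A[j][i] = u[k]
            k += 1
    return A

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 2.0],
     [0.0, 2.0, 5.0]]
u = upper_triangle(A)          # 6 values instead of 9
print(from_upper_triangle(u, 3) == A)  # -> True
```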

Re: [petsc-users] MatMatMult densemat

2019-03-05 Thread Smith, Barry F. via petsc-users
Marius, The reason this is happening is that the routine MatMatMultSymbolic_MPIDense_MPIDense() works by converting the matrix to Elemental format, doing the product, and then converting back. Elemental uses a block-cyclic storage format, so the row-ownership information is
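The ownership mismatch Barry describes can be pictured by contrasting the two layouts. A simplified 1-D sketch in plain Python (Elemental actually uses a 2-D block-cyclic distribution over a process grid, so this is only an illustration of why contiguous row ownership is lost):

```python
# Illustration of why row ownership changes: contiguous row ownership
# (PETSc MPIDense style) vs a 1-D block-cyclic assignment (Elemental-style).

def contiguous_owner(row, nrows, nprocs):
    """Owner under an even contiguous split of rows."""
    base, rem = divmod(nrows, nprocs)
    # The first `rem` processes own base+1 rows, the rest own `base` rows.
    if row < rem * (base + 1):
        return row // (base + 1)
    return rem + (row - rem * (base + 1)) // base

def block_cyclic_owner(row, block, nprocs):
    """Owner when rows are dealt out in fixed-size blocks, round-robin."""
    return (row // block) % nprocs

rows = range(8)
print([contiguous_owner(r, 8, 2) for r in rows])    # -> [0, 0, 0, 0, 1, 1, 1, 1]
print([block_cyclic_owner(r, 2, 2) for r in rows])  # -> [0, 0, 1, 1, 0, 0, 1, 1]
```

The same global matrix ends up with different rows on each process in the two layouts, which is why the product's row distribution does not match the one set manually at creation time.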

[petsc-users] MatMatMult densemat

2019-03-05 Thread Marius Buerkle via petsc-users
Hi, I have a question regarding MatMatMult for MPIDENSE matrices. I have two dense matrices A and B for which I manually set the number of local rows each processor owns (the same for A and B) when creating them with MatCreateDense (which is different from what PETSC_DECIDE would do).
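For comparison, the default split that PETSC_DECIDE produces can be sketched as follows (modeled on PETSc's PetscSplitOwnership, where the first M mod size ranks get one extra row; treat this as an approximation of the default, not a guarantee for every PETSc version):

```python
# Sketch of the near-even split PETSc computes when the local size is
# PETSC_DECIDE: each of the first M % size ranks gets one extra row.
# Setting local rows manually (as in the question) overrides this.

def petsc_decide_local_rows(M, size, rank):
    return M // size + (1 if M % size > rank else 0)

M, size = 10, 4
split = [petsc_decide_local_rows(M, size, r) for r in range(size)]
print(split)            # -> [3, 3, 2, 2]
print(sum(split) == M)  # -> True
```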

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-05 Thread Myriam Peyrounette via petsc-users
I used PCView to display the size of the linear system in each level of the MG. You'll find the outputs attached to this mail (zip file) for both the default threshold value and a value of 0.1, and for both 3.6 and 3.10 PETSc versions. For convenience, I summarized the information in a graph, also

Re: [petsc-users] streams test on hpc

2019-03-05 Thread Jed Brown via petsc-users
Of course, just as you would run any other MPI application. GangLu via petsc-users writes: > Hi all, > > When installing petsc, there is a stream test that is quite useful. > > Is it possible to run such test in batch mode, e.g. using pbs script? > > Thanks. > > cheers, > > Gang
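Following Jed's answer (run it like any other MPI application), a minimal PBS job script might look like the sketch below. The node/core counts, walltime, and the `make streams` invocation are placeholders and assumptions; the exact benchmark target and its options vary between PETSc versions and cluster setups.

```shell
#!/bin/bash
#PBS -N streams
#PBS -l nodes=1:ppn=16
#PBS -l walltime=00:10:00
# Hypothetical PBS job script: queue settings, paths, and core counts
# are placeholders -- adapt to your cluster and PETSc installation.
cd $PBS_O_WORKDIR
# Run the streams benchmark as you would any MPI application, e.g.:
make -C $PETSC_DIR streams NPMAX=16
```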

[petsc-users] streams test on hpc

2019-03-05 Thread GangLu via petsc-users
Hi all, When installing PETSc, there is a streams test that is quite useful. Is it possible to run such a test in batch mode, e.g. using a PBS script? Thanks. cheers, Gang

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-03-05 Thread Myriam Peyrounette via petsc-users
Hi Matt, I plotted the memory scalings using different threshold values. The two scalings are slightly shifted (by -22 to -88 MB), but this gain is negligible. The 3.6 scaling remains robust while the 3.10 scaling deteriorates. Do you have any other suggestion? Thanks Myriam Le 03/02/

Re: [petsc-users] Compute the sum of the absolute values of the off-block diagonal entries of each row

2019-03-05 Thread Cyrill Vonplanta via petsc-users
Yes, this does the trick for me. Thanks. Thx Cyrill > On 5 Mar 2019, at 00:10, Smith, Barry F. wrote: > > > How about something like, > > MatMPIAIJGetSeqAIJ(A,NULL,&Ao,NULL); > >> MatGetOwnershipRange(A, &rS, &rE); >> for (r = 0; r < rE-rS; ++r) { >> sum = 0.0; >> MatGetRow(Ao, r, &nco
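The quoted snippet works because PETSc's MPIAIJ format stores, on each process, a "diagonal" block (columns the process owns) and an off-diagonal block Ao; Barry's loop sums |value| over each local row of Ao. The same reduction over a CSR-stored block can be sketched in plain Python (a toy illustration, not the PETSc calls in the thread):

```python
# Concept sketch (plain Python, not PETSc API): sum of absolute values
# in each row of a CSR matrix -- the reduction Barry's MatGetRow loop
# performs on the off-diagonal block Ao of an MPIAIJ matrix.

def row_abs_sums(indptr, values):
    """Per-row sum of |value| for a CSR matrix with row pointer indptr."""
    return [sum((abs(v) for v in values[indptr[r]:indptr[r + 1]]), 0.0)
            for r in range(len(indptr) - 1)]

# Toy 3-row off-diagonal block:
# row 0: [-1.0, 2.0], row 1: (empty), row 2: [0.5]
indptr = [0, 2, 2, 3]
values = [-1.0, 2.0, 0.5]
print(row_abs_sums(indptr, values))  # -> [3.0, 0.0, 0.5]
```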