I see. No problem, I was just curious. A workaround for me is to put the result of MatMatMult into a temporary matrix and use MatCopy to copy it onto C with the correct local rows.
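For readers of the archive, a rough, untested sketch of that workaround (names are illustrative; A, B and C are assumed to be MPIDENSE matrices created elsewhere, with C already owning the desired local rows):

  Mat Ctmp;
  /* let PETSc create the product with its own row distribution */
  MatMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &Ctmp);
  /* copy the entries into C, which carries the desired local rows */
  MatCopy(Ctmp, C, SAME_NONZERO_PATTERN);
  MatDestroy(&Ctmp);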
Marius,
The reason this is happening is that the routine MatMatMultSymbolic_MPIDense_MPIDense() works by
this gives a seg. fault.
What happens if you try to preallocate the C matrix (in the same way as A and B) and use MatMatMult with MAT_REUSE_MATRIX?
Hong (Mr.)
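For readers of the archive, the suggestion amounts to something like the following sketch (names are illustrative, and, as noted elsewhere in the thread, this path ended in a seg. fault):

  /* C has been created beforehand with the same row layout as A and B */
  MatMatMult(A, B, MAT_REUSE_MATRIX, PETSC_DEFAULT, &C);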
On Mar 5, 2019, at 6:19 PM, Marius Buerkle via petsc-users wrote:
Hi,
I have a question regarding MatMatMult for MPIDENSE matrices.
Myriam, in your first message, there was a significant (about 50%)
increase in memory consumption already on 4 cores. Before attacking
scaling, it may be useful to trace memory usage for that base case.
Even better if you can reduce to one process. Anyway, I would start by
running both cases with
Myriam,
Sorry we have not been able to resolve this problem with memory scaling yet.
The best tool to determine the change in a code base that results in large
differences in a program's behavior is git bisect. Basically, you tell git bisect
the git commit of the code that is "good" and the git commit that is "bad", and it
walks you through the commits in between until it finds the one that introduced the change.
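For readers unfamiliar with it, a bisection session looks roughly like this (the commit identifiers are placeholders):

  git bisect start
  git bisect bad  <commit-with-high-memory-use>
  git bisect good <commit-with-expected-memory-use>
  # build and run the test case at the commit git checks out, then mark it
  # with "git bisect good" or "git bisect bad"; repeat until git reports the
  # first bad commit, and finish with:
  git bisect reset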
> On Mar 4, 2019, at 7:03 AM, Klaus Burkart via petsc-users
> wrote:
>
> Hello,
>
> I want to solve many symmetric linear systems one after another in parallel
> using boomerAMG + KSPCG and need to make the matrix transfer more efficient.
> Matrices are symmetric in structure and values.
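For readers of the archive, the solver combination mentioned above is typically set up roughly as follows; this is only a sketch, assuming PETSc was configured with hypre and that A, b and x already exist:

  KSP ksp;
  PC  pc;
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPCG);          /* CG for the symmetric systems */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCHYPRE);
  PCHYPRESetType(pc, "boomeramg"); /* hypre BoomerAMG as preconditioner */
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);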
Marius,
The reason this is happening is that the routine
MatMatMultSymbolic_MPIDense_MPIDense() works by converting the matrix to
Elemental format, doing the product, and then converting back. Elemental uses
a block-cyclic storage format, so the row ownership information is not
preserved in the result.
Hi,
I have a question regarding MatMatMult for MPIDENSE matrices. I have two dense matrices A and B for which I set the number of local rows each processor owns manually (the same for A and B) when creating them with MatCreateDense (which is different from what PETSC_DECIDE would do).
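A minimal sketch of that kind of setup (the local row count, the rank variable and the global column count N are illustrative, not taken from the original message):

  Mat      A;
  PetscInt mlocal = (rank == 0) ? 10 : 20;  /* manually chosen local rows per process */
  MatCreateDense(PETSC_COMM_WORLD, mlocal, PETSC_DECIDE, PETSC_DETERMINE, N, NULL, &A);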
I used PCView to display the size of the linear system in each level of
the MG. You'll find the outputs attached to this mail (zip file) for
both the default threshold value and a value of 0.1, and for both 3.6
and 3.10 PETSc versions.
For convenience, I summarized the information in a graph, also attached.
Of course, just as you would run any other MPI application.
GangLu via petsc-users writes:
> Hi all,
>
> When installing petsc, there is a stream test that is quite useful.
>
> Is it possible to run such test in batch mode, e.g. using pbs script?
>
> Thanks.
>
> cheers,
>
> Gang
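As an illustration only (the binary name and resource requests below are assumptions, not taken from the thread), a minimal PBS script running the benchmark like any other MPI application could look like:

  #!/bin/bash
  #PBS -N petsc_streams
  #PBS -l nodes=1:ppn=8
  #PBS -l walltime=00:10:00
  cd $PBS_O_WORKDIR
  mpiexec -n 8 ./MPIVersion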
Hi Matt,
I plotted the memory scalings using different threshold values. The two
scalings are slightly shifted (by -22 to -88 MB), but this gain is
negligible. The 3.6 scaling remains robust while the 3.10 scaling
deteriorates.
Do you have any other suggestion?
Thanks
Myriam
Yes, this does the trick for me. Thanks.
Thx Cyrill
> On 5 Mar 2019, at 00:10, Smith, Barry F. wrote:
>
>
> How about something like,
>
> MatMPIAIJGetSeqAIJ(A,NULL,&Ao,NULL);
>
>> MatGetOwnershipRange(A, &rS, &rE);
>> for (r = 0; r < rE-rS; ++r) {
>> sum = 0.0;
>> MatGetRow(Ao, r, &ncols, NULL, &vals);
>> for (c = 0; c < ncols; ++c) sum += vals[c];
>> MatRestoreRow(Ao, r, &ncols, NULL, &vals);
>> }