Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-16 Thread Hapla Vaclav via petsc-dev



> On 16 Jan 2019, at 15:22, Jed Brown via petsc-dev  
> wrote:
> 
> Matthew Knepley  writes:
> 
>> On Wed, Jan 16, 2019 at 9:01 AM Jed Brown via petsc-dev <
>> petsc-dev@mcs.anl.gov> wrote:
>> 
>>> Pierre Jolivet via petsc-dev  writes:
>>> 
 OK, I was wrong about MATAIJ, as Jed already pointed out.
 What about BAIJ or Dense matrices?
>>> 
>>> BAIJ (and SBAIJ) is handled by MatXAIJSetPreallocation.
>>> 
>> 
>> Dense matrices don't need preallocation.
> 
> True, MatMPIDenseSetPreallocation is more like a *PlaceArray or
> Create*WithArray.

... and this is why I think the name is wrong ...

> The caller needs to be aware of parallelism to create
> and use arrays; I guess Pierre does that but doesn't want the noise at
> the particular call site (though at least the if statements aren't
> needed).



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-16 Thread Pierre Jolivet via petsc-dev



> On 16 Jan 2019, at 3:22 PM, Jed Brown  wrote:
> 
> Matthew Knepley  writes:
> 
>> On Wed, Jan 16, 2019 at 9:01 AM Jed Brown via petsc-dev <
>> petsc-dev@mcs.anl.gov> wrote:
>> 
>>> Pierre Jolivet via petsc-dev  writes:
>>> 
 OK, I was wrong about MATAIJ, as Jed already pointed out.
 What about BAIJ or Dense matrices?
>>> 
>>> BAIJ (and SBAIJ) is handled by MatXAIJSetPreallocation.
>>> 
>> 
>> Dense matrices don't need preallocation.
> 
> True, MatMPIDenseSetPreallocation is more like a *PlaceArray or
> Create*WithArray.  The caller needs to be aware of parallelism to create
> and use arrays; I guess Pierre does that but doesn't want the noise at
> the particular call site (though at least the if statements aren't
> needed).

That is exactly what I’m doing, and what I would want. But I can live with a 
couple of if/else statements (should this never get fixed).
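
For reference, the call-site branch in question would look roughly like the sketch below (B is assumed to be an existing dense Mat and array a user-owned buffer of the right local size):

#include <petscmat.h>

/* Sketch only: hand a user-owned array to a dense matrix B, branching on the
   communicator size because there is no type-agnostic MatDenseSetPreallocation. */
static PetscErrorCode DenseSetUserArray(Mat B, PetscScalar *array)
{
  PetscMPIInt    size;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MPI_Comm_size(PetscObjectComm((PetscObject)B), &size);CHKERRQ(ierr);
  if (size == 1) {
    ierr = MatSeqDenseSetPreallocation(B, array);CHKERRQ(ierr);
  } else {
    ierr = MatMPIDenseSetPreallocation(B, array);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}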



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-16 Thread Jed Brown via petsc-dev
Matthew Knepley  writes:

> On Wed, Jan 16, 2019 at 9:01 AM Jed Brown via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
>
>> Pierre Jolivet via petsc-dev  writes:
>>
>> > OK, I was wrong about MATAIJ, as Jed already pointed out.
>> > What about BAIJ or Dense matrices?
>>
>> BAIJ (and SBAIJ) is handled by MatXAIJSetPreallocation.
>>
>
> Dense matrices don't need preallocation.

True, MatMPIDenseSetPreallocation is more like a *PlaceArray or
Create*WithArray.  The caller needs to be aware of parallelism to create
and use arrays; I guess Pierre does that but doesn't want the noise at
the particular call site (though at least the if statements aren't
needed).
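
For concreteness, a sketch of what being "aware of parallelism" involves for a dense matrix backed by user storage; this assumes MatCreateDense() passes the array through to the Seq or MPI implementation depending on the communicator size (my reading of the sources):

#include <petscmat.h>

/* Sketch: the local row count has to be known before the backing array can be
   sized and handed to the matrix.  Assumes an m-by-N column-major local block. */
static PetscErrorCode CreateDenseWithOwnArray(MPI_Comm comm, PetscInt M, PetscInt N, PetscScalar **data, Mat *B)
{
  PetscInt       m = PETSC_DECIDE;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscSplitOwnership(comm, &m, &M);CHKERRQ(ierr);  /* local row count */
  ierr = PetscMalloc1(m * N, data);CHKERRQ(ierr);          /* user-owned storage */
  ierr = MatCreateDense(comm, m, PETSC_DECIDE, M, N, *data, B);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}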


Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-16 Thread Jed Brown via petsc-dev
Pierre Jolivet via petsc-dev  writes:

> OK, I was wrong about MATAIJ, as Jed already pointed out.
> What about BAIJ or Dense matrices?

BAIJ (and SBAIJ) is handled by MatXAIJSetPreallocation.


Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-15 Thread Smith, Barry F. via petsc-dev

   I concur that there should be a VecCreateWithArray() that automatically uses a 
Seq vector for one process and an MPI vector for multiple processes.
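
   A minimal sketch of such a wrapper (hypothetical routine, not in PETSc today) could be:

#include <petscvec.h>

/* Sketch of the proposed VecCreateWithArray(): pick the Seq or MPI
   implementation from the communicator size instead of at the call site. */
static PetscErrorCode VecCreateWithArray(MPI_Comm comm, PetscInt bs, PetscInt n, PetscInt N, const PetscScalar array[], Vec *v)
{
  PetscMPIInt    size;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MPI_Comm_size(comm, &size);CHKERRQ(ierr);
  if (size == 1) {
    ierr = VecCreateSeqWithArray(comm, bs, n, array, v);CHKERRQ(ierr);
  } else {
    ierr = VecCreateMPIWithArray(comm, bs, n, N, array, v);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}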

   Barry


> On Jan 15, 2019, at 3:01 AM, Dave May via petsc-dev  
> wrote:
> 
> 
> 
> On Tue, 15 Jan 2019 at 08:50, Pierre Jolivet  
> wrote:
> OK, I was wrong about MATAIJ, as Jed already pointed out.
> What about BAIJ or Dense matrices?
> 
> The preallocation methods for BAIJ and Dense both internally use 
> PetscTryMethod.
> 
>  
> What about VecCreateMPIWithArray which seems to explicitly call 
> VecCreate_MPI_Private which explicitly sets the type to VECMPI 
> https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522
>  so that I cannot do a MatMult with a MATAIJ with a communicator of size 1?
> 
> That looks problematic. 
> 
> Possibly there should either be an if statement in VecCreateMPIWithArray() 
> associated with the comm size, or there should be a new API 
> VecCreateWithArray() with the same args as VecCreateMPIWithArray.
> 
> As a work around, you could add VecCreateWithArray() in your code base which 
> does the right thing. 
> 
>  
> 
> Thanks,
> Pierre  
> 
>> On 15 Jan 2019, at 9:40 AM, Dave May  wrote:
>> 
>> 
>> 
>> On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev 
>>  wrote:
>> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)”
>> All my Mat are of type MATMPIX. Switching to MATX here as you suggested 
>> would mean that I need to add a bunch of if(comm_size == 1) 
>> MatSeqXSetPreallocation else MatMPIXSetPreallocation in the rest of my code, 
>> which is something I would rather avoid.
>> 
>> Actually this is not the case.
>> 
>> If you do as Hong suggests and use MATAIJ then the switch for comm_size for 
>> Seq or MPI is done internally to MatCreate and is not required in the user 
>> code. Additionally, in your preallocation routine, you can call safely both 
>> (without your comm_size if statement)
>> MatSeqAIJSetPreallocation()
>> and
>> MatMPIAIJSetPreallocation()
>> If the matrix type matches that expected by the API, then it gets executed. 
>> Otherwise nothing happens.
>> 
>> This is done all over the place to enable the matrix type to be a run-time 
>> choice.
>> 
>> For example, see here
>> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
>> and look at lines 1511 and 1512. 
>> 
>> Thanks,
>>   Dave
>> 
>> 
>> 
>>  
>> 
>> Thanks,
>> Pierre
>> 
>>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
>>> 
>>> Replace 
>>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>>> with
>>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>>> 
>>> Replace 
>>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>>> with
>>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>>> 
>>> Then add
>>> MatSeqAIJSetPreallocation()
>>> MatSeqDenseSetPreallocation()
>>> 
>>> Hong
>>> 
>>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
>>>  wrote:
>>> Hello,
>>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program 
>>> could run _with a single_ process? (I know, I could switch to 
>>> SeqAIJ_SeqDense, but that is not an option I have right now)
>>> 
>>> Thanks in advance,
>>> Pierre
>>> 
>> 
> 



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-15 Thread Pierre Jolivet via petsc-dev

> On 15 Jan 2019, at 10:01 AM, Dave May  wrote:
> 
> 
> 
> On Tue, 15 Jan 2019 at 08:50, Pierre Jolivet  wrote:
> OK, I was wrong about MATAIJ, as Jed already pointed out.
> What about BAIJ or Dense matrices?
> 
> The preallocation methods for BAIJ and Dense both internally use 
> PetscTryMethod.

I don’t see any MatDenseSetPreallocation in master; what are you referring to, 
please?

>  
> What about VecCreateMPIWithArray which seems to explicitly call 
> VecCreate_MPI_Private which explicitly sets the type to VECMPI 
> https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522
>  so that I cannot do a MatMult with a MATAIJ with a communicator of size 1?
> 
> That looks problematic. 
> 
> Possibly there should either be an if statement in VecCreateMPIWithArray() 
> associated with the comm size, or there should be a new API 
> VecCreateWithArray() with the same args as VecCreateMPIWithArray.
> 
> As a work around, you could add VecCreateWithArray() in your code base which 
> does the right thing. 

Sure, I can find a workaround in my code, but I still think it is best not 
to have PETSc segfault when a user is doing something they are allowed to do :)

Thanks,
Pierre

>  
> 
> Thanks,
> Pierre  
> 
>> On 15 Jan 2019, at 9:40 AM, Dave May  wrote:
>> 
>> 
>> 
>> On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev 
>> <petsc-dev@mcs.anl.gov> wrote:
>> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)”
>> All my Mat are of type MATMPIX. Switching to MATX here as you suggested 
>> would mean that I need to add a bunch of if(comm_size == 1) 
>> MatSeqXSetPreallocation else MatMPIXSetPreallocation in the rest of my code, 
>> which is something I would rather avoid.
>> 
>> Actually this is not the case.
>> 
>> If you do as Hong suggests and use MATAIJ then the switch for comm_size for 
>> Seq or MPI is done internally to MatCreate and is not required in the user 
>> code. Additionally, in your preallocation routine, you can call safely both 
>> (without your comm_size if statement)
>> MatSeqAIJSetPreallocation()
>> and
>> MatMPIAIJSetPreallocation()
>> If the matrix type matches that expected by the API, then it gets executed. 
>> Otherwise nothing happens.
>> 
>> This is done all over the place to enable the matrix type to be a run-time 
>> choice.
>> 
>> For example, see here
>> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
>> and look at lines 1511 and 1512. 
>> 
>> Thanks,
>>   Dave
>> 
>> 
>> 
>>  
>> 
>> Thanks,
>> Pierre
>> 
>>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
>>> 
>>> Replace 
>>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>>> with
>>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>>> 
>>> Replace 
>>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>>> with
>>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>>> 
>>> Then add
>>> MatSeqAIJSetPreallocation()
>>> MatSeqDenseSetPreallocation()
>>> 
>>> Hong
>>> 
>>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
>>> <petsc-dev@mcs.anl.gov> wrote:
>>> Hello,
>>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program 
>>> could run _with a single_ process? (I know, I could switch to 
>>> SeqAIJ_SeqDense, but that is not an option I have right now)
>>> 
>>> Thanks in advance,
>>> Pierre
>>> 
>> 
> 



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-15 Thread Pierre Jolivet via petsc-dev
OK, I was wrong about MATAIJ, as Jed already pointed out.
What about BAIJ or Dense matrices?
What about VecCreateMPIWithArray, which seems to explicitly call 
VecCreate_MPI_Private, which explicitly sets the type to VECMPI 
(https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522),
so that I cannot do a MatMult with a MATAIJ with a communicator of size 1?
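
For concreteness, here is a self-contained sketch of the combination I mean (sizes and values are purely illustrative):

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, y;
  PetscScalar    xarr[2] = {1.0, 2.0};
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  /* A 2x2 identity; with MATAIJ the type is SeqAIJ when run on one process */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 2, 2);CHKERRQ(ierr);
  ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(A, 1, NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, 1, NULL, 0, NULL);CHKERRQ(ierr);
  ierr = MatSetValue(A, 0, 0, 1.0, INSERT_VALUES);CHKERRQ(ierr);
  ierr = MatSetValue(A, 1, 1, 1.0, INSERT_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  /* x and y are forced to type VECMPI even on a single process */
  ierr = VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, 2, PETSC_DECIDE, xarr, &x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &y);CHKERRQ(ierr);
  ierr = MatMult(A, x, y);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}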

Thanks,
Pierre  

> On 15 Jan 2019, at 9:40 AM, Dave May  wrote:
> 
> 
> 
> On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev 
> <petsc-dev@mcs.anl.gov> wrote:
> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
> that is not an option I have right now)”
> All my Mat are of type MATMPIX. Switching to MATX here as you suggested would 
> mean that I need to add a bunch of if(comm_size == 1) MatSeqXSetPreallocation 
> else MatMPIXSetPreallocation in the rest of my code, which is something I 
> would rather avoid.
> 
> Actually this is not the case.
> 
> If you do as Hong suggests and use MATAIJ then the switch for comm_size for 
> Seq or MPI is done internally to MatCreate and is not required in the user 
> code. Additionally, in your preallocation routine, you can call safely both 
> (without your comm_size if statement)
> MatSeqAIJSetPreallocation()
> and
> MatMPIAIJSetPreallocation()
> If the matrix type matches that expected by the API, then it gets executed. 
> Otherwise nothing happens.
> 
> This is done all over the place to enable the matrix type to be a run-time 
> choice.
> 
> For example, see here
> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
> and look at lines 1511 and 1512. 
> 
> Thanks,
>   Dave
> 
> 
> 
>  
> 
> Thanks,
> Pierre
> 
>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
>> 
>> Replace 
>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>> with
>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>> 
>> Replace 
>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>> with
>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>> 
>> Then add
>> MatSeqAIJSetPreallocation()
>> MatSeqDenseSetPreallocation()
>> 
>> Hong
>> 
>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
>> <petsc-dev@mcs.anl.gov> wrote:
>> Hello,
>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program could 
>> run _with a single_ process? (I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)
>> 
>> Thanks in advance,
>> Pierre
>> 
> 



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-15 Thread Dave May via petsc-dev
On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:

> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense,
> but that is not an option I have right now)”
> All my Mat are of type MATMPIX. Switching to MATX here as you suggested
> would mean that I need to add a bunch of if(comm_size == 1)
> MatSeqXSetPreallocation else MatMPIXSetPreallocation in the rest of my
> code, which is something I would rather avoid.
>

Actually this is not the case.

If you do as Hong suggests and use MATAIJ then the switch for comm_size for
Seq or MPI is done internally to MatCreate and is not required in the user
code. Additionally, in your preallocation routine, you can call safely both
(without your comm_size if statement)
MatSeqAIJSetPreallocation()
and
MatMPIAIJSetPreallocation()
If the matrix type matches that expected by the API, then it gets executed.
Otherwise nothing happens.

This is done all over the place to enable the matrix type to be a run-time
choice.

For example, see here
https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
and look at lines 1511 and 1512.
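
In other words, a user preallocation routine along these lines is safe for both the Seq and MPI cases (sketch only; dnnz/onnz are the usual per-row counts):

#include <petscmat.h>

/* Sketch: with a MATAIJ matrix, both calls below are safe on any communicator
   size.  Only the one matching the actual (Seq or MPI) type takes effect; the
   other is a no-op, since the preallocation routines dispatch via PetscTryMethod. */
static PetscErrorCode MyPreallocate(Mat A, const PetscInt dnnz[], const PetscInt onnz[])
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatSeqAIJSetPreallocation(A, PETSC_DEFAULT, dnnz);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, PETSC_DEFAULT, dnnz, PETSC_DEFAULT, onnz);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}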

Thanks,
  Dave





>
> Thanks,
> Pierre
>
> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
>
> Replace
> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
> with
> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>
> Replace
> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
> with
> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>
> Then add
> MatSeqAIJSetPreallocation()
> MatSeqDenseSetPreallocation()
>
> Hong
>
> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
>
>> Hello,
>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and
>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program
>> could run _with a single_ process? (I know, I could switch to
>> SeqAIJ_SeqDense, but that is not an option I have right now)
>>
>> Thanks in advance,
>> Pierre
>>
>>
>


Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-14 Thread Pierre Jolivet via petsc-dev

> On 15 Jan 2019, at 6:26 AM, Jed Brown  wrote:
> 
> We should repair the MPI matrix implementations so that this works on 
> communicators of size 1

Great, I was worried that I had missed a place where it is explicitly stated that 
you should not use MPI types on communicators of size 1.

> but why can't you use MatXAIJSetPreallocation().

OK, so replace my if/else with if(comm_size == 1) VecCreateSeqWithArray else 
VecCreateMPIWithArray.
There is also no MatXBAIJSetPreallocationCSR or MatXDenseSetPreallocation, so 
those are other if/elses.

Or am I missing something?
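
For concreteness, the BAIJ-from-CSR branch would be something like the following sketch (bs, i, j, v are the usual block size and CSR arrays):

#include <petscmat.h>

/* Sketch of one of the extra branches mentioned above: there is no
   MatXBAIJSetPreallocationCSR, so the Seq/MPI variant is picked by hand. */
static PetscErrorCode BAIJPreallocateCSR(Mat A, PetscInt bs, const PetscInt i[], const PetscInt j[], const PetscScalar v[])
{
  PetscMPIInt    size;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MPI_Comm_size(PetscObjectComm((PetscObject)A), &size);CHKERRQ(ierr);
  if (size == 1) {
    ierr = MatSeqBAIJSetPreallocationCSR(A, bs, i, j, v);CHKERRQ(ierr);
  } else {
    ierr = MatMPIBAIJSetPreallocationCSR(A, bs, i, j, v);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}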

Thanks,
Pierre

> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatXAIJSetPreallocation.html
> 
> Pierre Jolivet via petsc-dev  writes:
> 
>> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)”
>> All my Mat are of type MATMPIX. Switching to MATX here as you suggested 
>> would mean that I need to add a bunch of if(comm_size == 1) 
>> MatSeqXSetPreallocation else MatMPIXSetPreallocation in the rest of my code, 
>> which is something I would rather avoid.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
>>> 
>>> Replace 
>>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>>> with
>>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>>> 
>>> Replace 
>>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>>> with
>>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>>> 
>>> Then add
>>> MatSeqAIJSetPreallocation()
>>> MatSeqDenseSetPreallocation()
>>> 
>>> Hong
>>> 
>>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
>>> <petsc-dev@mcs.anl.gov> wrote:
>>> Hello,
>>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program 
>>> could run _with a single_ process? (I know, I could switch to 
>>> SeqAIJ_SeqDense, but that is not an option I have right now)
>>> 
>>> Thanks in advance,
>>> Pierre



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-14 Thread Jed Brown via petsc-dev
We should repair the MPI matrix implementations so that this works on 
communicators of size 1, but why can't you use MatXAIJSetPreallocation()?

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatXAIJSetPreallocation.html
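
A minimal usage sketch (dnnz/onnz are the usual per-(block-)row counts; my reading is that the two SBAIJ-only trailing arguments can be left NULL here):

#include <petscmat.h>

/* Sketch: one call that preallocates SeqAIJ/MPIAIJ/(S)BAIJ alike, so the user
   code does not branch on communicator size or matrix type. */
static PetscErrorCode PreallocateAnyAIJ(Mat A, PetscInt bs, const PetscInt dnnz[], const PetscInt onnz[])
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatXAIJSetPreallocation(A, bs, dnnz, onnz, NULL, NULL);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}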

Pierre Jolivet via petsc-dev  writes:

> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
> that is not an option I have right now)”
> All my Mat are of type MATMPIX. Switching to MATX here as you suggested would 
> mean that I need to add a bunch of if(comm_size == 1) MatSeqXSetPreallocation 
> else MatMPIXSetPreallocation in the rest of my code, which is something I 
> would rather avoid.
>
> Thanks,
> Pierre
>
>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
>> 
>> Replace 
>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>> with
>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>> 
>> Replace 
>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>> with
>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>> 
>> Then add
>> MatSeqAIJSetPreallocation()
>> MatSeqDenseSetPreallocation()
>> 
>> Hong
>> 
>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
>> <petsc-dev@mcs.anl.gov> wrote:
>> Hello,
>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program could 
>> run _with a single_ process? (I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)
>> 
>> Thanks in advance,
>> Pierre
>> 


Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-14 Thread Pierre Jolivet via petsc-dev
Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
that is not an option I have right now)"
All my Mat are of type MATMPIX. Switching to MATX here as you suggested would 
mean that I need to add a bunch of if(comm_size == 1) MatSeqXSetPreallocation 
else MatMPIXSetPreallocation in the rest of my code, which is something I would 
rather avoid.

Thanks,
Pierre

> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
> 
> Replace 
> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
> with
> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
> 
> Replace 
> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
> with
> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
> 
> Then add
> MatSeqAIJSetPreallocation()
> MatSeqDenseSetPreallocation()
> 
> Hong
> 
> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
> <petsc-dev@mcs.anl.gov> wrote:
> Hello,
> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program could 
> run _with a single_ process? (I know, I could switch to SeqAIJ_SeqDense, but 
> that is not an option I have right now)
> 
> Thanks in advance,
> Pierre
> 



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-14 Thread Zhang, Hong via petsc-dev
Replace
ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
with
ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);

Replace
ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
with
ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);

Then add
MatSeqAIJSetPreallocation()
MatSeqDenseSetPreallocation()
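
Put together for B, that would be something like the following sketch (sizes illustrative; with MATDENSE only the preallocation call matching the actual Seq or MPI type takes effect):

#include <petscmat.h>

/* Sketch of the suggested setup for B: type MATDENSE plus both preallocation
   calls; the call that does not match the runtime type is a no-op. */
static PetscErrorCode CreateB(MPI_Comm comm, PetscInt M, PetscInt N, Mat *B)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatCreate(comm, B);CHKERRQ(ierr);
  ierr = MatSetSizes(*B, PETSC_DECIDE, PETSC_DECIDE, M, N);CHKERRQ(ierr);
  ierr = MatSetType(*B, MATDENSE);CHKERRQ(ierr);
  ierr = MatSeqDenseSetPreallocation(*B, NULL);CHKERRQ(ierr);  /* NULL: let PETSc allocate */
  ierr = MatMPIDenseSetPreallocation(*B, NULL);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}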

Hong

On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
<petsc-dev@mcs.anl.gov> wrote:
Hello,
Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program could 
run _with a single_ process? (I know, I could switch to SeqAIJ_SeqDense, but 
that is not an option I have right now)

Thanks in advance,
Pierre