Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Dave May
On Thu, 16 Aug 2018 at 04:44, Manuel Valera  wrote:

> Thanks Matthew and Barry,
>
> Now my code looks like:
>
> call DMSetMatrixPreallocateOnly(daDummy,PETSC_TRUE,ierr)
>
> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
>> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>>
> call DMCreateMatrix(daDummy,A,ierr)
>> call MatSetFromOptions(A,ierr)
>
> call MatSetUp(A,ierr)
>> [...]
>> call
>> MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
>> [...]
>> call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
>> call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
>
>
> And i get a different error, now is:
>
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Argument out of range
> [0]PETSC ERROR: Column too large: col 10980 max 124
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
> Date: 2018-05-31 17:31:13 +0300
> [0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed
> Aug 15 19:40:00 2018
> [0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 442 in
> /home/valera/petsc/src/mat/impls/aij/seq/aij.c
> [0]PETSC ERROR: #2 MatSetValues() line 1339 in
> /home/valera/petsc/src/mat/interface/matrix.c
>
>
> Thanks again,
>


This error has nothing to do with the matrix type being used. The size of the
matrix is defined by the particular DM. You should be using the DM-associated
data/APIs to set values in the matrix.

It's not obvious how the args here

  call MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)

actually relate to the DM. Code snippets aren't helpful in this case to
understand the error.

I suggest you send a complete example illustrating your actual problem.

Thanks,
  Dave
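For reference, a minimal sketch of the DM-associated assembly Dave is pointing
at, using MatSetValuesStencil so that entries are addressed by DMDA grid
indices instead of raw global indices. This is not code from the thread: the
3D DMDA, the single diagonal entry per row, and the names daDummy/A are
assumptions.

      subroutine assembleViaDM(daDummy, A, ierr)
#include <petsc/finclude/petscmat.h>
#include <petsc/finclude/petscdmda.h>
      use petscmat
      use petscdmda
      implicit none
      DM             :: daDummy
      Mat            :: A
      PetscErrorCode :: ierr
      MatStencil     :: row(4), col(4)
      PetscScalar    :: v(1)
      PetscInt       :: i, j, k, xs, ys, zs, xm, ym, zm, one

      one = 1
      ! Corner and widths of this rank's (ghost-free) part of the grid
      call DMDAGetCorners(daDummy, xs, ys, zs, xm, ym, zm, ierr)
      do k = zs, zs+zm-1
        do j = ys, ys+ym-1
          do i = xs, xs+xm-1
            row(MatStencil_i) = i
            row(MatStencil_j) = j
            row(MatStencil_k) = k
            col(MatStencil_i) = i          ! placeholder: one diagonal entry
            col(MatStencil_j) = j
            col(MatStencil_k) = k
            v(1) = 1.0
            ! PETSc maps the (i,j,k) stencil indices to the DMDA's global
            ! ordering, so no manual row/column arithmetic is needed.
            call MatSetValuesStencil(A, one, row, one, col, v, INSERT_VALUES, ierr)
          end do
        end do
      end do
      call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
      call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
      end subroutine assembleViaDM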




>
>
>
>
>
>
>
> On Wed, Aug 15, 2018 at 7:02 PM, Smith, Barry F. 
> wrote:
>
>>
>>   Should be
>>
>> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
>> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>> call DMCreateMatrix(daDummy,A,ierr)
>>
>>   and remove the rest. You need to set the type of Mat you want the DM to
>> return BEFORE you create the matrix.
>>
>>   Barry
>>
>>
>>
>> > On Aug 15, 2018, at 4:45 PM, Manuel Valera  wrote:
>> >
>> > Ok thanks for clarifying that, i wasn't sure if there were different
>> types,
>> >
>> > Here is a stripped down version of my code, it seems like the
>> preallocation is working now since the matrix population part is working
>> without problem, but here it is for illustration purposes:
>> >
>> > call DMSetMatrixPreallocateOnly(daDummy,PETSC_TRUE,ierr)
>> > call DMCreateMatrix(daDummy,A,ierr)
>> > call MatSetFromOptions(A,ierr)
>> > call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
>> > call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>> > call
>> MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
>> > call MatSetUp(A,ierr)
>> > [...]
>> > call
>> MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
>> > [...]
>> > call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
>> > call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
>> >
>> > Adding the first line there did the trick,
>> >
>> > Now the problem seems to be the program is not recognizing the matrix
>> as ViennaCL type when i try with more than one processor, i get now:
>> >
>> > [0]PETSC ERROR: - Error Message
>> --
>> > [0]PETSC ERROR: No support for this operation for this object type
>> > [0]PETSC ERROR: Currently only handles ViennaCL matrices
>> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> for trouble shooting.
>> > [0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53
>> GIT Date: 2018-05-31 17:31:13 +0300
>> > [0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed
>> Aug 15 14:44:22 2018
>> > [0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
>> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
>> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
>> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
>> > [0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in
>> /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
>> > [0]PETSC ERROR: #2 PCSetUp() line 932 in
>> /home/valera/petsc/src/ksp/pc/interface/precon.c
>> > [0]PETSC ERROR: #3 KSPSetUp() line 381 in
>> /home/valera/petsc/src/ksp/ksp/interface/itfunc.c
>> >
>> > When running with:
>> >
>> > mpirun -n 1 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/

Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Manuel Valera
Thanks Matthew and Barry,

Now my code looks like:

call DMSetMatrixPreallocateOnly(daDummy,PETSC_TRUE,ierr)

call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>
call DMCreateMatrix(daDummy,A,ierr)
> call MatSetFromOptions(A,ierr)

call MatSetUp(A,ierr)
> [...]
> call MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),
> INSERT_VALUES,ierr)
> [...]
> call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
> call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)


And I get a different error; now it is:

[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Argument out of range
[0]PETSC ERROR: Column too large: col 10980 max 124
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
Date: 2018-05-31 17:31:13 +0300
[0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug
15 19:40:00 2018
[0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
--with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
--FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
--CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
[0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 442 in
/home/valera/petsc/src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: #2 MatSetValues() line 1339 in
/home/valera/petsc/src/mat/interface/matrix.c


Thanks again,








On Wed, Aug 15, 2018 at 7:02 PM, Smith, Barry F.  wrote:

>
>   Should be
>
> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
> call DMCreateMatrix(daDummy,A,ierr)
>
>   and remove the rest. You need to set the type of Mat you want the DM to
> return BEFORE you create the matrix.
>
>   Barry
>
>
>
> > On Aug 15, 2018, at 4:45 PM, Manuel Valera  wrote:
> >
> > Ok thanks for clarifying that, i wasn't sure if there were different
> types,
> >
> > Here is a stripped down version of my code, it seems like the
> preallocation is working now since the matrix population part is working
> without problem, but here it is for illustration purposes:
> >
> > call DMSetMatrixPreallocateOnly(daDummy,PETSC_TRUE,ierr)
> > call DMCreateMatrix(daDummy,A,ierr)
> > call MatSetFromOptions(A,ierr)
> > call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
> > call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
> > call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,
> PETSC_NULL_INTEGER,ierr)
> > call MatSetUp(A,ierr)
> > [...]
> > call MatSetValues(A,1,row,sumpos,
> pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
> > [...]
> > call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
> > call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
> >
> > Adding the first line there did the trick,
> >
> > Now the problem seems to be the program is not recognizing the matrix as
> ViennaCL type when i try with more than one processor, i get now:
> >
> > [0]PETSC ERROR: - Error Message
> --
> > [0]PETSC ERROR: No support for this operation for this object type
> > [0]PETSC ERROR: Currently only handles ViennaCL matrices
> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> > [0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
> Date: 2018-05-31 17:31:13 +0300
> > [0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed
> Aug 15 14:44:22 2018
> > [0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64
> --download-viennacl
> > [0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in
> /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
> > [0]PETSC ERROR: #2 PCSetUp() line 932 in /home/valera/petsc/src/ksp/pc/
> interface/precon.c
> > [0]PETSC ERROR: #3 KSPSetUp() line 381 in /home/valera/petsc/src/ksp/
> ksp/interface/itfunc.c
> >
> > When running with:
> >
> > mpirun -n 1 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
> aijviennacl -pc_type saviennacl -log_view
> >
> >
> > Thanks,
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > On Wed, Aug 15, 2018 at 2:32 PM, Matthew Knepley 
> wrote:
> > On Wed, Aug 15, 2018 at 5:20 PM Manuel Valera 
> wrote:
> > It seems to be resumed on: I do not know how to preallocate a DM Matrix
> correctly.
> >
> > There is only one matrix type, Mat. There are no separate DM matrices. A
> DM can create a matrix for you
> > using DMCreateMatrix(), but that is a Mat and it is preallocated
> correctly. I am not sure what you are doing.
> >
> >   Thanks,
> >
> > Matt
> >
> > The interesting part is that 

Re: [petsc-users] FIELDSPLIT fields

2018-08-15 Thread Griffith, Boyce Eugene


> On Aug 15, 2018, at 10:07 PM, Smith, Barry F.  wrote:
> 
> 
>   Yes you can have "overlapping fields" with FIELDSPLIT but I don't think you 
> can use FIELDSPLIT for your case. You seem to have a geometric decomposition 
> into regions. ASM and GASM are intended for such decompositions. Fieldsplit 
> is for multiple fields that each live across the entire domain.

Basically there is one field that lives on the entire domain, and another field 
that lives only on a subdomain.

Perhaps we could do GASM for the geometric split and FIELDSPLIT within the 
subdomain with the two fields.

>   Barry
> 
> 
>> On Aug 15, 2018, at 7:42 PM, Griffith, Boyce Eugene  
>> wrote:
>> 
>> Is it permissible to have overlapping fields in FIELDSPLIT? We are 
>> specifically thinking about how to handle DOFs living on the interface 
>> between two regions.
>> 
>> Thanks!
>> 
>> — Boyce
> 



Re: [petsc-users] FIELDSPLIT fields

2018-08-15 Thread Smith, Barry F.

   Yes you can have "overlapping fields" with FIELDSPLIT but I don't think you 
can use FIELDSPLIT for your case. You seem to have a geometric decomposition 
into regions. ASM and GASM are intended for such decompositions. Fieldsplit is 
for multiple fields that each live across the entire domain.

   Barry


> On Aug 15, 2018, at 7:42 PM, Griffith, Boyce Eugene  
> wrote:
> 
> Is it permissible to have overlapping fields in FIELDSPLIT? We are 
> specifically thinking about how to handle DOFs living on the interface 
> between two regions.
> 
> Thanks!
> 
> — Boyce



Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Smith, Barry F.


  Should be

call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
call DMCreateMatrix(daDummy,A,ierr)

  and remove the rest. You need to set the type of Mat you want the DM to 
return BEFORE you create the matrix.

  Barry
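For context, a sketch of the full ordering Barry describes, from DMDA creation
through DMCreateMatrix, assuming the usual PETSc finclude/module context. The
grid sizes, dof count, and stencil width are made-up placeholders; only the
ViennaCL type names and the daDummy/A names are taken from the snippets above.

      DM             :: daDummy
      Mat            :: A
      PetscErrorCode :: ierr
      PetscInt       :: nx, ny, nz, dof, swidth

      nx = 6; ny = 6; nz = 6      ! hypothetical grid
      dof = 1; swidth = 1         ! hypothetical dof count and stencil width

      call DMDACreate3d(PETSC_COMM_WORLD,                                  &
             DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,         &
             DMDA_STENCIL_STAR, nx, ny, nz,                                &
             PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, dof, swidth,        &
             PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, PETSC_NULL_INTEGER,   &
             daDummy, ierr)
      call DMSetFromOptions(daDummy, ierr)   ! process runtime options for the DM
      call DMSetUp(daDummy, ierr)
      ! Set the types BEFORE DMCreateMatrix, as Barry says:
      call DMSetMatType(daDummy, MATMPIAIJVIENNACL, ierr)
      call DMSetVecType(daDummy, VECMPIVIENNACL, ierr)
      call DMCreateMatrix(daDummy, A, ierr)
      ! No MatSetType/MatSetUp/MatMPIAIJSetPreallocation afterwards: the Mat
      ! returned here already has the requested type and its preallocation.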



> On Aug 15, 2018, at 4:45 PM, Manuel Valera  wrote:
> 
> Ok thanks for clarifying that, i wasn't sure if there were different types,
> 
> Here is a stripped down version of my code, it seems like the preallocation 
> is working now since the matrix population part is working without problem, 
> but here it is for illustration purposes:
> 
> call DMSetMatrixPreallocateOnly(daDummy,PETSC_TRUE,ierr)
> call DMCreateMatrix(daDummy,A,ierr)
> call MatSetFromOptions(A,ierr)
> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
> call 
> MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
> call MatSetUp(A,ierr)
> [...]
> call 
> MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
> [...]
> call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
> call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
> 
> Adding the first line there did the trick,
> 
> Now the problem seems to be the program is not recognizing the matrix as 
> ViennaCL type when i try with more than one processor, i get now:
> 
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: No support for this operation for this object type
> [0]PETSC ERROR: Currently only handles ViennaCL matrices
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT 
> Date: 2018-05-31 17:31:13 +0300
> [0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug 
> 15 14:44:22 2018
> [0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug 
> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2 
> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1 
> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
> [0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in 
> /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
> [0]PETSC ERROR: #2 PCSetUp() line 932 in 
> /home/valera/petsc/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: #3 KSPSetUp() line 381 in 
> /home/valera/petsc/src/ksp/ksp/interface/itfunc.c
> 
> When running with:
> 
> mpirun -n 1 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/ 
> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type 
> aijviennacl -pc_type saviennacl -log_view
> 
> 
> Thanks,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> On Wed, Aug 15, 2018 at 2:32 PM, Matthew Knepley  wrote:
> On Wed, Aug 15, 2018 at 5:20 PM Manuel Valera  wrote:
> It seems to be resumed on: I do not know how to preallocate a DM Matrix 
> correctly.
> 
> There is only one matrix type, Mat. There are no separate DM matrices. A DM 
> can create a matrix for you
> using DMCreateMatrix(), but that is a Mat and it is preallocated correctly. I 
> am not sure what you are doing.
> 
>   Thanks,
> 
> Matt
>  
> The interesting part is that it only breaks when i need to populate a GPU 
> matrix from MPI, so kudos on that, but it seems i need to do better on my 
> code to get this setup working,
> 
> Any help would be appreciated,
> 
> Thanks,
> 
> 
> 
> On Wed, Aug 15, 2018 at 2:15 PM, Matthew Knepley  wrote:
> On Wed, Aug 15, 2018 at 4:53 PM Manuel Valera  wrote:
> Thanks Matthew, 
> 
> I try to do that when calling:
> 
> call 
> MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
> 
> But i am not aware on how to do this for the DM if it needs something more 
> specific/different,
> 
> The error says that your preallocation is wrong for the values you are 
> putting in. The DM does not control either,
> so I do not understand your email.
> 
>   Thanks,
> 
>  Matt
>  
> Thanks,
> 
> On Wed, Aug 15, 2018 at 1:51 PM, Matthew Knepley  wrote:
> On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera  wrote:
> Hello PETSc devs,
> 
> I am running into an error when trying to use the MATMPIAIJVIENNACL Matrix 
> type in MPI calls, the same code runs for MATSEQAIJVIENNACL type in one 
> processor. The error happens when calling MatSetValues for this specific 
> configuration. It does not occur when using MPI DMMatrix types only.
> 
> The DM properly preallocates the matrix. I am assuming you do not here.
> 
>Matt
>  
> Any help will be appreciated, 
> 
> Thanks,
> 
> 
> 
> My program call:
> 
> mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/ 
> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type 
> aijviennacl -pc_type saviennacl -log_view 
> 
> 
> The error (repeats after each MatSetValues call):
> 
> [1]PETSC ERROR: - Error Message 
> 

Re: [petsc-users] FIELDSPLIT fields

2018-08-15 Thread Griffith, Boyce Eugene


On Aug 15, 2018, at 9:17 PM, Matthew Knepley  wrote:

On Wed, Aug 15, 2018 at 8:42 PM Griffith, Boyce Eugene  wrote:
Is it permissible to have overlapping fields in FIELDSPLIT? We are specifically 
thinking about how to handle DOFs living on the interface between two regions.

There is only 1 IS, so no way to do RASM, or any other thing on the overlap. 
This sort of thing was supposed to be handled by GASM.

There are three big blocks, with one interface between two of them. It may not 
make much difference which subdomain the interfacial DOFs are assigned to.

I am not sure that works. If you want blocks that are not parallel, then you 
can probably use PCPATCH as soon as I get it merged.

   Matt

Thanks!

— Boyce


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/



Re: [petsc-users] FIELDSPLIT fields

2018-08-15 Thread Matthew Knepley
On Wed, Aug 15, 2018 at 8:42 PM Griffith, Boyce Eugene 
wrote:

> Is it permissible to have overlapping fields in FIELDSPLIT? We are
> specifically thinking about how to handle DOFs living on the interface
> between two regions.
>

There is only 1 IS, so no way to do RASM, or any other thing on the
overlap. This sort of thing was supposed to be handled by GASM.
I am not sure that works. If you want blocks that are not parallel, then
you can probably use PCPATCH as soon as I get it merged.

   Matt
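For reference, a hypothetical sketch of how splits are declared by index sets,
which is where the one-IS-per-field restriction Matt mentions comes from. None
of this is from the thread: the KSP, the field names u/p, and the index sets
isU/isP are assumptions (and whether those ISes may overlap is exactly the
open question here).

      ! Sketch: each FIELDSPLIT field is defined by a single IS.
      KSP            :: ksp
      PC             :: pc
      IS             :: isU, isP
      PetscErrorCode :: ierr
      ! ... build isU and isP (e.g. with ISCreateGeneral) for the two fields ...
      call KSPGetPC(ksp, pc, ierr)
      call PCSetType(pc, PCFIELDSPLIT, ierr)
      call PCFieldSplitSetIS(pc, 'u', isU, ierr)
      call PCFieldSplitSetIS(pc, 'p', isP, ierr)
      call PCSetFromOptions(pc, ierr)   ! e.g. -pc_fieldsplit_type additive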


> Thanks!
>
> — Boyce



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] FIELDSPLIT fields

2018-08-15 Thread Griffith, Boyce Eugene
Is it permissible to have overlapping fields in FIELDSPLIT? We are specifically 
thinking about how to handle DOFs living on the interface between two regions.

Thanks!

— Boyce

Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Matthew Knepley
On Wed, Aug 15, 2018 at 5:45 PM Manuel Valera  wrote:

> Ok thanks for clarifying that, i wasn't sure if there were different types,
>
> Here is a stripped down version of my code, it seems like the
> preallocation is working now since the matrix population part is working
> without problem, but here it is for illustration purposes:
>

Good.


> call DMSetMatrixPreallocateOnly(daDummy,PETSC_TRUE,ierr)
>> call DMCreateMatrix(daDummy,A,ierr)
>> call MatSetFromOptions(A,ierr)
>> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
>> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>>
>
The two statements above should be unnecessary with the command line
options you have.
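A short sketch of what Matt means here (an assumption about the surrounding
setup, not thread code): if DMSetFromOptions is called on the DM, the
-dm_mat_type and -dm_vec_type options already select the ViennaCL types, so
the two explicit calls can be dropped.

      ! With: -dm_mat_type aijviennacl -dm_vec_type viennacl
      call DMSetFromOptions(daDummy, ierr)   ! picks the types up from the options
      call DMSetUp(daDummy, ierr)
      call DMCreateMatrix(daDummy, A, ierr)  ! the Mat comes back with that type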


> call
>> MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
>>
>
This should not be necessary since the matrix you get from DMCreateMatrix()
should be preallocated.


> call MatSetUp(A,ierr)
>> [...]
>> call
>> MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
>> [...]
>> call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
>> call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
>
>
> Adding the first line there did the trick,
>
> Now the problem seems to be the program is not recognizing the matrix as
> ViennaCL type when i try with more than one processor, i get now:
>

This is a bad error message since it does not tell us the type of the
matrix. You could look in the debugger to see what type the
matrix is. I would get rid of all the extraneous statements above so we can
figure out what is happening.

   Matt
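As an alternative to the debugger, a sketch (assuming the Fortran binding of
MatGetType and a plain character buffer) of printing the matrix type just
before the solve, which would show whether the parallel Mat really ends up as
mpiaijviennacl:

      character(len=80) :: mtype
      PetscErrorCode    :: ierr
      ! Print the actual run-time type of A on this rank.
      call MatGetType(A, mtype, ierr)
      print *, 'Mat type: ', trim(mtype)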


> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: No support for this operation for this object type
> [0]PETSC ERROR: Currently only handles ViennaCL matrices
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
> Date: 2018-05-31 17:31:13 +0300
> [0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed
> Aug 15 14:44:22 2018
> [0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
> [0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in
> /home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
> [0]PETSC ERROR: #2 PCSetUp() line 932 in
> /home/valera/petsc/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: #3 KSPSetUp() line 381 in
> /home/valera/petsc/src/ksp/ksp/interface/itfunc.c
>
> When running with:
>
> mpirun -n 1 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
> aijviennacl -pc_type saviennacl -log_view
>
>
> Thanks,
>
>
>
>
>
>
>
>
>
>
> On Wed, Aug 15, 2018 at 2:32 PM, Matthew Knepley 
> wrote:
>
>> On Wed, Aug 15, 2018 at 5:20 PM Manuel Valera  wrote:
>>
>>> It seems to be resumed on: I do not know how to preallocate a DM Matrix
>>> correctly.
>>>
>>
>> There is only one matrix type, Mat. There are no separate DM matrices. A
>> DM can create a matrix for you
>> using DMCreateMatrix(), but that is a Mat and it is preallocated
>> correctly. I am not sure what you are doing.
>>
>>   Thanks,
>>
>> Matt
>>
>>
>>> The interesting part is that it only breaks when i need to populate a
>>> GPU matrix from MPI, so kudos on that, but it seems i need to do better on
>>> my code to get this setup working,
>>>
>>> Any help would be appreciated,
>>>
>>> Thanks,
>>>
>>>
>>>
>>> On Wed, Aug 15, 2018 at 2:15 PM, Matthew Knepley 
>>> wrote:
>>>
 On Wed, Aug 15, 2018 at 4:53 PM Manuel Valera 
 wrote:

> Thanks Matthew,
>
> I try to do that when calling:
>
> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,
> PETSC_NULL_INTEGER,ierr)
>
> But i am not aware on how to do this for the DM if it needs something
> more specific/different,
>

 The error says that your preallocation is wrong for the values you are
 putting in. The DM does not control either,
 so I do not understand your email.

   Thanks,

  Matt


> Thanks,
>
> On Wed, Aug 15, 2018 at 1:51 PM, Matthew Knepley 
> wrote:
>
>> On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera 
>> wrote:
>>
>>> Hello PETSc devs,
>>>
>>> I am running into an error when trying to use the MATMPIAIJVIENNACL
>>> Matrix type in MPI calls, the same code runs for MATSEQAIJVIENNACL type 
>>> in
>>> one processor. The error happens when calling MatSetValues for this
>>> specific configuration. It does not occur when using MPI DMMatrix types
>>> only.
>>>
>>
>> The DM properly preallocates the matrix. I am assuming you do not
>> here.
>>
>>Matt

Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Manuel Valera
OK, thanks for clarifying that; I wasn't sure if there were different types.

Here is a stripped-down version of my code. The preallocation seems to be
working now, since the matrix-population part runs without problems, but here
it is for illustration purposes:

call DMSetMatrixPreallocateOnly(daDummy,PETSC_TRUE,ierr)
> call DMCreateMatrix(daDummy,A,ierr)
> call MatSetFromOptions(A,ierr)
> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
> call
> MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
> call MatSetUp(A,ierr)
> [...]
> call
> MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
> [...]
> call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
> call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)


Adding the first line there did the trick.

Now the problem seems to be that the program is not recognizing the matrix as
a ViennaCL type when I try with more than one processor; I now get:

[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Currently only handles ViennaCL matrices
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
Date: 2018-05-31 17:31:13 +0300
[0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug
15 14:44:22 2018
[0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
--with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
--FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
--CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
[0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in
/home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
[0]PETSC ERROR: #2 PCSetUp() line 932 in
/home/valera/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #3 KSPSetUp() line 381 in
/home/valera/petsc/src/ksp/ksp/interface/itfunc.c

When running with:

mpirun -n 1 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
aijviennacl -pc_type saviennacl -log_view


Thanks,










On Wed, Aug 15, 2018 at 2:32 PM, Matthew Knepley  wrote:

> On Wed, Aug 15, 2018 at 5:20 PM Manuel Valera  wrote:
>
>> It seems to be resumed on: I do not know how to preallocate a DM Matrix
>> correctly.
>>
>
> There is only one matrix type, Mat. There are no separate DM matrices. A
> DM can create a matrix for you
> using DMCreateMatrix(), but that is a Mat and it is preallocated
> correctly. I am not sure what you are doing.
>
>   Thanks,
>
> Matt
>
>
>> The interesting part is that it only breaks when i need to populate a GPU
>> matrix from MPI, so kudos on that, but it seems i need to do better on my
>> code to get this setup working,
>>
>> Any help would be appreciated,
>>
>> Thanks,
>>
>>
>>
>> On Wed, Aug 15, 2018 at 2:15 PM, Matthew Knepley 
>> wrote:
>>
>>> On Wed, Aug 15, 2018 at 4:53 PM Manuel Valera 
>>> wrote:
>>>
 Thanks Matthew,

 I try to do that when calling:

 call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_
 NULL_INTEGER,ierr)

 But i am not aware on how to do this for the DM if it needs something
 more specific/different,

>>>
>>> The error says that your preallocation is wrong for the values you are
>>> putting in. The DM does not control either,
>>> so I do not understand your email.
>>>
>>>   Thanks,
>>>
>>>  Matt
>>>
>>>
 Thanks,

 On Wed, Aug 15, 2018 at 1:51 PM, Matthew Knepley 
 wrote:

> On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera 
> wrote:
>
>> Hello PETSc devs,
>>
>> I am running into an error when trying to use the MATMPIAIJVIENNACL
>> Matrix type in MPI calls, the same code runs for MATSEQAIJVIENNACL type 
>> in
>> one processor. The error happens when calling MatSetValues for this
>> specific configuration. It does not occur when using MPI DMMatrix types
>> only.
>>
>
> The DM properly preallocates the matrix. I am assuming you do not here.
>
>Matt
>
>
>> Any help will be appreciated,
>>
>> Thanks,
>>
>>
>>
>> My program call:
>>
>> mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
>> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
>> aijviennacl -pc_type saviennacl -log_view
>>
>>
>> The error (repeats after each MatSetValues call):
>>
>> [1]PETSC ERROR: - Error Message
>> --
>> [1]PETSC ERROR: Argument out of range
>> [1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50)
>> into matrix
>> [1]PETSC ERROR: See 

Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Matthew Knepley
On Wed, Aug 15, 2018 at 5:20 PM Manuel Valera  wrote:

> It seems to be resumed on: I do not know how to preallocate a DM Matrix
> correctly.
>

There is only one matrix type, Mat. There are no separate DM matrices. A DM
can create a matrix for you
using DMCreateMatrix(), but that is a Mat and it is preallocated correctly.
I am not sure what you are doing.

  Thanks,

Matt


> The interesting part is that it only breaks when i need to populate a GPU
> matrix from MPI, so kudos on that, but it seems i need to do better on my
> code to get this setup working,
>
> Any help would be appreciated,
>
> Thanks,
>
>
>
> On Wed, Aug 15, 2018 at 2:15 PM, Matthew Knepley 
> wrote:
>
>> On Wed, Aug 15, 2018 at 4:53 PM Manuel Valera  wrote:
>>
>>> Thanks Matthew,
>>>
>>> I try to do that when calling:
>>>
>>> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,
>>> PETSC_NULL_INTEGER,ierr)
>>>
>>> But i am not aware on how to do this for the DM if it needs something
>>> more specific/different,
>>>
>>
>> The error says that your preallocation is wrong for the values you are
>> putting in. The DM does not control either,
>> so I do not understand your email.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Thanks,
>>>
>>> On Wed, Aug 15, 2018 at 1:51 PM, Matthew Knepley 
>>> wrote:
>>>
 On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera 
 wrote:

> Hello PETSc devs,
>
> I am running into an error when trying to use the MATMPIAIJVIENNACL
> Matrix type in MPI calls, the same code runs for MATSEQAIJVIENNACL type in
> one processor. The error happens when calling MatSetValues for this
> specific configuration. It does not occur when using MPI DMMatrix types
> only.
>

 The DM properly preallocates the matrix. I am assuming you do not here.

Matt


> Any help will be appreciated,
>
> Thanks,
>
>
>
> My program call:
>
> mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
> aijviennacl -pc_type saviennacl -log_view
>
>
> The error (repeats after each MatSetValues call):
>
> [1]PETSC ERROR: - Error Message
> --
> [1]PETSC ERROR: Argument out of range
> [1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50)
> into matrix
> [1]PETSC ERROR: See
> http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble
> shooting.
> [1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53
> GIT Date: 2018-05-31 17:31:13 +0300
> [1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera
> Wed Aug 15 13:10:44 2018
> [1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 
> --download-viennacl
> [1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 608 in
> /home/valera/petsc/src/mat/impls/aij/mpi/mpiaij.c
> [1]PETSC ERROR: #2 MatSetValues() line 1339 in
> /home/valera/petsc/src/mat/interface/matrix.c
>
>
> My Code structure:
>
> call DMCreateMatrix(daDummy,A,ierr)
>> call MatSetFromOptions(A,ierr)
>> call MPI_Comm_size(PETSC_COMM_WORLD, numprocs, ierr)
>> if (numprocs > 1) then  ! set matrix type parallel
>> ! Get local size
>> call DMDACreateNaturalVector(daDummy,Tmpnat,ierr)
>> call VecGetLocalSize(Tmpnat,locsize,ierr)
>> call VecDestroy(Tmpnat,ierr)
>> ! Set matrix
>> #ifdef GPU
>> call MatSetType(A,MATAIJVIENNACL,ierr)
>> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
>> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>> print*,'SETTING GPU TYPES'
>> #else
>> call DMSetMatType(daDummy,MATMPIAIJ,ierr)
>> call DMSetMatType(daDummy,VECMPI,ierr)
>> call MatSetType(A,MATMPIAIJ,ierr)!
>> #endif
>> call
>> MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
>> else! set matrix type sequential
>> #ifdef GPU
>> call DMSetMatType(daDummy,MATSEQAIJVIENNACL,ierr)
>> call DMSetVecType(daDummy,VECSEQVIENNACL,ierr)
>> call MatSetType(A,MATSEQAIJVIENNACL,ierr)
>> print*,'SETTING GPU TYPES'
>> #else
>> call DMSetMatType(daDummy,MATSEQAIJ,ierr)
>> call DMSetMatType(daDummy,VECSEQ,ierr)
>> call MatSetType(A,MATSEQAIJ,ierr)
>> #endif
>> call MatSetUp(A,ierr)
>> call getCenterInfo(daGrid,xstart,ystart,zstart,xend,yend,zend)
>>
>
>
>> do k=zstart,zend-1
>> do j=ystart,yend-1
>> do i=xstart,xend-1
>> [..]
>>call
>> 

Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Manuel Valera
It seems to come down to this: I do not know how to preallocate a DM matrix
correctly.

The interesting part is that it only breaks when I need to populate a GPU
matrix from MPI, so kudos on that, but it seems I need to do better on my
code to get this setup working.

Any help would be appreciated,

Thanks,



On Wed, Aug 15, 2018 at 2:15 PM, Matthew Knepley  wrote:

> On Wed, Aug 15, 2018 at 4:53 PM Manuel Valera  wrote:
>
>> Thanks Matthew,
>>
>> I try to do that when calling:
>>
>> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_
>> NULL_INTEGER,ierr)
>>
>> But i am not aware on how to do this for the DM if it needs something
>> more specific/different,
>>
>
> The error says that your preallocation is wrong for the values you are
> putting in. The DM does not control either,
> so I do not understand your email.
>
>   Thanks,
>
>  Matt
>
>
>> Thanks,
>>
>> On Wed, Aug 15, 2018 at 1:51 PM, Matthew Knepley 
>> wrote:
>>
>>> On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera 
>>> wrote:
>>>
 Hello PETSc devs,

 I am running into an error when trying to use the MATMPIAIJVIENNACL
 Matrix type in MPI calls, the same code runs for MATSEQAIJVIENNACL type in
 one processor. The error happens when calling MatSetValues for this
 specific configuration. It does not occur when using MPI DMMatrix types
 only.

>>>
>>> The DM properly preallocates the matrix. I am assuming you do not here.
>>>
>>>Matt
>>>
>>>
 Any help will be appreciated,

 Thanks,



 My program call:

 mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
 jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
 aijviennacl -pc_type saviennacl -log_view


 The error (repeats after each MatSetValues call):

 [1]PETSC ERROR: - Error Message
 --
 [1]PETSC ERROR: Argument out of range
 [1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50)
 into matrix
 [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
 for trouble shooting.
 [1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53
 GIT Date: 2018-05-31 17:31:13 +0300
 [1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed
 Aug 15 13:10:44 2018
 [1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
 --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
 --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
 --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64
 --download-viennacl
 [1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 608 in
 /home/valera/petsc/src/mat/impls/aij/mpi/mpiaij.c
 [1]PETSC ERROR: #2 MatSetValues() line 1339 in
 /home/valera/petsc/src/mat/interface/matrix.c


 My Code structure:

 call DMCreateMatrix(daDummy,A,ierr)
> call MatSetFromOptions(A,ierr)
> call MPI_Comm_size(PETSC_COMM_WORLD, numprocs, ierr)
> if (numprocs > 1) then  ! set matrix type parallel
> ! Get local size
> call DMDACreateNaturalVector(daDummy,Tmpnat,ierr)
> call VecGetLocalSize(Tmpnat,locsize,ierr)
> call VecDestroy(Tmpnat,ierr)
> ! Set matrix
> #ifdef GPU
> call MatSetType(A,MATAIJVIENNACL,ierr)
> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
> print*,'SETTING GPU TYPES'
> #else
> call DMSetMatType(daDummy,MATMPIAIJ,ierr)
> call DMSetMatType(daDummy,VECMPI,ierr)
> call MatSetType(A,MATMPIAIJ,ierr)!
> #endif
> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,
> PETSC_NULL_INTEGER,ierr)
> else! set matrix type sequential
> #ifdef GPU
> call DMSetMatType(daDummy,MATSEQAIJVIENNACL,ierr)
> call DMSetVecType(daDummy,VECSEQVIENNACL,ierr)
> call MatSetType(A,MATSEQAIJVIENNACL,ierr)
> print*,'SETTING GPU TYPES'
> #else
> call DMSetMatType(daDummy,MATSEQAIJ,ierr)
> call DMSetMatType(daDummy,VECSEQ,ierr)
> call MatSetType(A,MATSEQAIJ,ierr)
> #endif
> call MatSetUp(A,ierr)
> call getCenterInfo(daGrid,xstart,ystart,zstart,xend,yend,zend)
>


> do k=zstart,zend-1
> do j=ystart,yend-1
> do i=xstart,xend-1
> [..]
>call MatSetValues(A,1,row,sumpos,
> pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
> [..]






>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/ 
>>>
>>
>>
>
> --
> What most experimenters take for granted 

Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Matthew Knepley
On Wed, Aug 15, 2018 at 4:53 PM Manuel Valera  wrote:

> Thanks Matthew,
>
> I try to do that when calling:
>
> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,
> PETSC_NULL_INTEGER,ierr)
>
> But i am not aware on how to do this for the DM if it needs something more
> specific/different,
>

The error says that your preallocation is wrong for the values you are
putting in. The DM does not control either,
so I do not understand your email.

  Thanks,

 Matt


> Thanks,
>
> On Wed, Aug 15, 2018 at 1:51 PM, Matthew Knepley 
> wrote:
>
>> On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera  wrote:
>>
>>> Hello PETSc devs,
>>>
>>> I am running into an error when trying to use the MATMPIAIJVIENNACL
>>> Matrix type in MPI calls, the same code runs for MATSEQAIJVIENNACL type in
>>> one processor. The error happens when calling MatSetValues for this
>>> specific configuration. It does not occur when using MPI DMMatrix types
>>> only.
>>>
>>
>> The DM properly preallocates the matrix. I am assuming you do not here.
>>
>>Matt
>>
>>
>>> Any help will be appreciated,
>>>
>>> Thanks,
>>>
>>>
>>>
>>> My program call:
>>>
>>> mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
>>> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
>>> aijviennacl -pc_type saviennacl -log_view
>>>
>>>
>>> The error (repeats after each MatSetValues call):
>>>
>>> [1]PETSC ERROR: - Error Message
>>> --
>>> [1]PETSC ERROR: Argument out of range
>>> [1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50)
>>> into matrix
>>> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>>> for trouble shooting.
>>> [1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
>>> Date: 2018-05-31 17:31:13 +0300
>>> [1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed
>>> Aug 15 13:10:44 2018
>>> [1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
>>> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
>>> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
>>> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
>>> [1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 608 in
>>> /home/valera/petsc/src/mat/impls/aij/mpi/mpiaij.c
>>> [1]PETSC ERROR: #2 MatSetValues() line 1339 in
>>> /home/valera/petsc/src/mat/interface/matrix.c
>>>
>>>
>>> My Code structure:
>>>
>>> call DMCreateMatrix(daDummy,A,ierr)
 call MatSetFromOptions(A,ierr)
 call MPI_Comm_size(PETSC_COMM_WORLD, numprocs, ierr)
 if (numprocs > 1) then  ! set matrix type parallel
 ! Get local size
 call DMDACreateNaturalVector(daDummy,Tmpnat,ierr)
 call VecGetLocalSize(Tmpnat,locsize,ierr)
 call VecDestroy(Tmpnat,ierr)
 ! Set matrix
 #ifdef GPU
 call MatSetType(A,MATAIJVIENNACL,ierr)
 call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
 call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
 print*,'SETTING GPU TYPES'
 #else
 call DMSetMatType(daDummy,MATMPIAIJ,ierr)
 call DMSetMatType(daDummy,VECMPI,ierr)
 call MatSetType(A,MATMPIAIJ,ierr)!
 #endif
 call
 MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
 else! set matrix type sequential
 #ifdef GPU
 call DMSetMatType(daDummy,MATSEQAIJVIENNACL,ierr)
 call DMSetVecType(daDummy,VECSEQVIENNACL,ierr)
 call MatSetType(A,MATSEQAIJVIENNACL,ierr)
 print*,'SETTING GPU TYPES'
 #else
 call DMSetMatType(daDummy,MATSEQAIJ,ierr)
 call DMSetMatType(daDummy,VECSEQ,ierr)
 call MatSetType(A,MATSEQAIJ,ierr)
 #endif
 call MatSetUp(A,ierr)
 call getCenterInfo(daGrid,xstart,ystart,zstart,xend,yend,zend)

>>>
>>>
 do k=zstart,zend-1
 do j=ystart,yend-1
 do i=xstart,xend-1
 [..]
call
 MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
 [..]
>>>
>>>
>>>
>>>
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/ 
>>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Manuel Valera
OK,

I removed the MatSetType call and added DMSetMatrixPreallocateOnly before
creating the matrix, and this resolved that error, but the same situation
persists; when I try to run with more than one processor it says:

[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: Currently only handles ViennaCL matrices
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
Date: 2018-05-31 17:31:13 +0300
[0]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug
15 14:09:31 2018
[0]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
--with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
--FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
--CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
[0]PETSC ERROR: #1 PCSetUp_SAVIENNACL() line 47 in
/home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
[0]PETSC ERROR: #2 PCSetUp() line 932 in
/home/valera/petsc/src/ksp/pc/interface/precon.c
[0]PETSC ERROR: #3 KSPSetUp() line 381 in
/home/valera/petsc/src/ksp/ksp/interface/itfunc.c
[1]PETSC ERROR:

[1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
probably memory access out of range
[1]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[1]PETSC ERROR: or see
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X
to find memory corruption errors
[1]PETSC ERROR: likely location of problem given in stack below
[1]PETSC ERROR: -  Stack Frames

[1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[1]PETSC ERROR:   INSTEAD the line number of the start of the function
[1]PETSC ERROR:   is given.
[1]PETSC ERROR: [1] PetscTraceBackErrorHandler line 182
/home/valera/petsc/src/sys/error/errtrace.c
[1]PETSC ERROR: [1] PetscError line 352
/home/valera/petsc/src/sys/error/err.c
[1]PETSC ERROR: [1] PCSetUp_SAVIENNACL line 45
/home/valera/petsc/src/ksp/pc/impls/saviennaclcuda/saviennacl.cu
[1]PETSC ERROR: [1] PCSetUp line 894
/home/valera/petsc/src/ksp/pc/interface/precon.c
[1]PETSC ERROR: [1] KSPSetUp line 294
/home/valera/petsc/src/ksp/ksp/interface/itfunc.c
[1]PETSC ERROR: - Error Message
--
[1]PETSC ERROR: Signal received
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
Date: 2018-05-31 17:31:13 +0300
[1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug
15 14:09:31 2018
[1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
--with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
--FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
--CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl

Thanks,







On Wed, Aug 15, 2018 at 1:53 PM, Manuel Valera  wrote:

> Thanks Matthew,
>
> I try to do that when calling:
>
> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_
> NULL_INTEGER,ierr)
>
> But i am not aware on how to do this for the DM if it needs something more
> specific/different,
>
> Thanks,
>
> On Wed, Aug 15, 2018 at 1:51 PM, Matthew Knepley 
> wrote:
>
>> On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera  wrote:
>>
>>> Hello PETSc devs,
>>>
>>> I am running into an error when trying to use the MATMPIAIJVIENNACL
>>> Matrix type in MPI calls, the same code runs for MATSEQAIJVIENNACL type in
>>> one processor. The error happens when calling MatSetValues for this
>>> specific configuration. It does not occur when using MPI DMMatrix types
>>> only.
>>>
>>
>> The DM properly preallocates the matrix. I am assuming you do not here.
>>
>>Matt
>>
>>
>>> Any help will be appreciated,
>>>
>>> Thanks,
>>>
>>>
>>>
>>> My program call:
>>>
>>> mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
>>> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
>>> aijviennacl -pc_type saviennacl -log_view
>>>
>>>
>>> The error (repeats after each MatSetValues call):
>>>
>>> [1]PETSC ERROR: - Error Message
>>> --
>>> [1]PETSC ERROR: Argument out of range
>>> [1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50)
>>> into matrix
>>> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>>> for trouble shooting.
>>> [1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
>>> Date: 2018-05-31 17:31:13 

Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Manuel Valera
Thanks Matthew,

I try to do that when calling:

call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,
PETSC_NULL_INTEGER,ierr)

But I am not aware of how to do this for the DM if it needs something more
specific/different,

Thanks,

On Wed, Aug 15, 2018 at 1:51 PM, Matthew Knepley  wrote:

> On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera  wrote:
>
>> Hello PETSc devs,
>>
>> I am running into an error when trying to use the MATMPIAIJVIENNACL
>> Matrix type in MPI calls, the same code runs for MATSEQAIJVIENNACL type in
>> one processor. The error happens when calling MatSetValues for this
>> specific configuration. It does not occur when using MPI DMMatrix types
>> only.
>>
>
> The DM properly preallocates the matrix. I am assuming you do not here.
>
>Matt
>
>
>> Any help will be appreciated,
>>
>> Thanks,
>>
>>
>>
>> My program call:
>>
>> mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
>> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
>> aijviennacl -pc_type saviennacl -log_view
>>
>>
>> The error (repeats after each MatSetValues call):
>>
>> [1]PETSC ERROR: - Error Message
>> --
>> [1]PETSC ERROR: Argument out of range
>> [1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50)
>> into matrix
>> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> for trouble shooting.
>> [1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
>> Date: 2018-05-31 17:31:13 +0300
>> [1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed
>> Aug 15 13:10:44 2018
>> [1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
>> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
>> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
>> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64
>> --download-viennacl
>> [1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 608 in
>> /home/valera/petsc/src/mat/impls/aij/mpi/mpiaij.c
>> [1]PETSC ERROR: #2 MatSetValues() line 1339 in /home/valera/petsc/src/mat/
>> interface/matrix.c
>>
>>
>> My Code structure:
>>
>> call DMCreateMatrix(daDummy,A,ierr)
>>> call MatSetFromOptions(A,ierr)
>>> call MPI_Comm_size(PETSC_COMM_WORLD, numprocs, ierr)
>>> if (numprocs > 1) then  ! set matrix type parallel
>>> ! Get local size
>>> call DMDACreateNaturalVector(daDummy,Tmpnat,ierr)
>>> call VecGetLocalSize(Tmpnat,locsize,ierr)
>>> call VecDestroy(Tmpnat,ierr)
>>> ! Set matrix
>>> #ifdef GPU
>>> call MatSetType(A,MATAIJVIENNACL,ierr)
>>> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
>>> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>>> print*,'SETTING GPU TYPES'
>>> #else
>>> call DMSetMatType(daDummy,MATMPIAIJ,ierr)
>>> call DMSetMatType(daDummy,VECMPI,ierr)
>>> call MatSetType(A,MATMPIAIJ,ierr)!
>>> #endif
>>> call MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,
>>> PETSC_NULL_INTEGER,ierr)
>>> else! set matrix type sequential
>>> #ifdef GPU
>>> call DMSetMatType(daDummy,MATSEQAIJVIENNACL,ierr)
>>> call DMSetVecType(daDummy,VECSEQVIENNACL,ierr)
>>> call MatSetType(A,MATSEQAIJVIENNACL,ierr)
>>> print*,'SETTING GPU TYPES'
>>> #else
>>> call DMSetMatType(daDummy,MATSEQAIJ,ierr)
>>> call DMSetMatType(daDummy,VECSEQ,ierr)
>>> call MatSetType(A,MATSEQAIJ,ierr)
>>> #endif
>>> call MatSetUp(A,ierr)
>>> call getCenterInfo(daGrid,xstart,ystart,zstart,xend,yend,zend)
>>>
>>
>>
>>> do k=zstart,zend-1
>>> do j=ystart,yend-1
>>> do i=xstart,xend-1
>>> [..]
>>>call MatSetValues(A,1,row,sumpos,
>>> pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
>>> [..]
>>
>>
>>
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/ 
>


Re: [petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Matthew Knepley
On Wed, Aug 15, 2018 at 4:39 PM Manuel Valera  wrote:

> Hello PETSc devs,
>
> I am running into an error when trying to use the MATMPIAIJVIENNACL Matrix
> type in MPI calls, the same code runs for MATSEQAIJVIENNACL type in one
> processor. The error happens when calling MatSetValues for this specific
> configuration. It does not occur when using MPI DMMatrix types only.
>

The DM properly preallocates the matrix. I am assuming you do not here.

   Matt
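For reference, a sketch of what explicit preallocation of an MPIAIJ matrix
looks like when it is done by hand rather than by the DM; the per-row counts
d_nnz/o_nnz and the local-size query are illustrative assumptions, not the
thread's code. Underestimating the off-diagonal (other-rank) counts is the
kind of mistake that triggers the "Inserting a new nonzero at global
row/column (...)" error quoted below.

      ! Sketch: per-row preallocation for an MPIAIJ matrix whose type and
      ! sizes have already been set (e.g. via MatSetType/MatSetSizes).
      PetscInt, allocatable :: d_nnz(:), o_nnz(:)
      PetscInt              :: mloc, r
      PetscErrorCode        :: ierr

      call MatGetLocalSize(A, mloc, PETSC_NULL_INTEGER, ierr)
      allocate(d_nnz(mloc), o_nnz(mloc))
      do r = 1, mloc
        d_nnz(r) = 7    ! hypothetical: nonzeros in columns owned by this rank
        o_nnz(r) = 12   ! hypothetical: nonzeros in columns owned by other ranks
      end do
      call MatMPIAIJSetPreallocation(A, PETSC_DEFAULT_INTEGER, d_nnz,        &
                                     PETSC_DEFAULT_INTEGER, o_nnz, ierr)
      deallocate(d_nnz, o_nnz)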


> Any help will be appreciated,
>
> Thanks,
>
>
>
> My program call:
>
> mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
> jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
> aijviennacl -pc_type saviennacl -log_view
>
>
> The error (repeats after each MatSetValues call):
>
> [1]PETSC ERROR: - Error Message
> --
> [1]PETSC ERROR: Argument out of range
> [1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50) into
> matrix
> [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
> Date: 2018-05-31 17:31:13 +0300
> [1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed
> Aug 15 13:10:44 2018
> [1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
> --with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
> --FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
> --CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
> [1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 608 in
> /home/valera/petsc/src/mat/impls/aij/mpi/mpiaij.c
> [1]PETSC ERROR: #2 MatSetValues() line 1339 in
> /home/valera/petsc/src/mat/interface/matrix.c
>
>
> My Code structure:
>
> call DMCreateMatrix(daDummy,A,ierr)
>> call MatSetFromOptions(A,ierr)
>> call MPI_Comm_size(PETSC_COMM_WORLD, numprocs, ierr)
>> if (numprocs > 1) then  ! set matrix type parallel
>> ! Get local size
>> call DMDACreateNaturalVector(daDummy,Tmpnat,ierr)
>> call VecGetLocalSize(Tmpnat,locsize,ierr)
>> call VecDestroy(Tmpnat,ierr)
>> ! Set matrix
>> #ifdef GPU
>> call MatSetType(A,MATAIJVIENNACL,ierr)
>> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
>> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
>> print*,'SETTING GPU TYPES'
>> #else
>> call DMSetMatType(daDummy,MATMPIAIJ,ierr)
>> call DMSetMatType(daDummy,VECMPI,ierr)
>> call MatSetType(A,MATMPIAIJ,ierr)!
>> #endif
>> call
>> MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
>> else! set matrix type sequential
>> #ifdef GPU
>> call DMSetMatType(daDummy,MATSEQAIJVIENNACL,ierr)
>> call DMSetVecType(daDummy,VECSEQVIENNACL,ierr)
>> call MatSetType(A,MATSEQAIJVIENNACL,ierr)
>> print*,'SETTING GPU TYPES'
>> #else
>> call DMSetMatType(daDummy,MATSEQAIJ,ierr)
>> call DMSetMatType(daDummy,VECSEQ,ierr)
>> call MatSetType(A,MATSEQAIJ,ierr)
>> #endif
>> call MatSetUp(A,ierr)
>> call getCenterInfo(daGrid,xstart,ystart,zstart,xend,yend,zend)
>>
>
>
>> do k=zstart,zend-1
>> do j=ystart,yend-1
>> do i=xstart,xend-1
>> [..]
>>call
>> MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
>> [..]
>
>
>
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] MatSetValues error with ViennaCL types

2018-08-15 Thread Manuel Valera
Hello PETSc devs,

I am running into an error when trying to use the MATMPIAIJVIENNACL matrix
type in MPI calls; the same code runs with the MATSEQAIJVIENNACL type on one
processor. The error happens when calling MatSetValues for this specific
configuration. It does not occur when using the MPI DM matrix types only.

Any help will be appreciated,

Thanks,



My program call:

mpirun -n 2 ./gcmLEP.GPU tc=TestCases/LockRelease/LE_6x6x6/
jid=tiny_cuda_test_n2 -ksp_type cg -dm_vec_type viennacl -dm_mat_type
aijviennacl -pc_type saviennacl -log_view


The error (repeats after each MatSetValues call):

[1]PETSC ERROR: - Error Message
--
[1]PETSC ERROR: Argument out of range
[1]PETSC ERROR: Inserting a new nonzero at global row/column (75, 50) into
matrix
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[1]PETSC ERROR: Petsc Development GIT revision: v3.9.2-549-g779ab53  GIT
Date: 2018-05-31 17:31:13 +0300
[1]PETSC ERROR: ./gcmLEP.GPU on a cuda-debug named node50 by valera Wed Aug
15 13:10:44 2018
[1]PETSC ERROR: Configure options PETSC_ARCH=cuda-debug
--with-mpi-dir=/usr/lib64/openmpi --COPTFLAGS=-O2 --CXXOPTFLAGS=-O2
--FOPTFLAGS=-O2 --with-shared-libraries=1 --with-debugging=1 --with-cuda=1
--CUDAFLAGS=-arch=sm_60 --with-blaslapack-dir=/usr/lib64 --download-viennacl
[1]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 608 in
/home/valera/petsc/src/mat/impls/aij/mpi/mpiaij.c
[1]PETSC ERROR: #2 MatSetValues() line 1339 in
/home/valera/petsc/src/mat/interface/matrix.c


My Code structure:

call DMCreateMatrix(daDummy,A,ierr)
> call MatSetFromOptions(A,ierr)
> call MPI_Comm_size(PETSC_COMM_WORLD, numprocs, ierr)
> if (numprocs > 1) then  ! set matrix type parallel
> ! Get local size
> call DMDACreateNaturalVector(daDummy,Tmpnat,ierr)
> call VecGetLocalSize(Tmpnat,locsize,ierr)
> call VecDestroy(Tmpnat,ierr)
> ! Set matrix
> #ifdef GPU
> call MatSetType(A,MATAIJVIENNACL,ierr)
> call DMSetMatType(daDummy,MATMPIAIJVIENNACL,ierr)
> call DMSetVecType(daDummy,VECMPIVIENNACL,ierr)
> print*,'SETTING GPU TYPES'
> #else
> call DMSetMatType(daDummy,MATMPIAIJ,ierr)
> call DMSetMatType(daDummy,VECMPI,ierr)
> call MatSetType(A,MATMPIAIJ,ierr)!
> #endif
> call
> MatMPIAIJSetPreallocation(A,19,PETSC_NULL_INTEGER,19,PETSC_NULL_INTEGER,ierr)
> else! set matrix type sequential
> #ifdef GPU
> call DMSetMatType(daDummy,MATSEQAIJVIENNACL,ierr)
> call DMSetVecType(daDummy,VECSEQVIENNACL,ierr)
> call MatSetType(A,MATSEQAIJVIENNACL,ierr)
> print*,'SETTING GPU TYPES'
> #else
> call DMSetMatType(daDummy,MATSEQAIJ,ierr)
> call DMSetMatType(daDummy,VECSEQ,ierr)
> call MatSetType(A,MATSEQAIJ,ierr)
> #endif
> call MatSetUp(A,ierr)
> call getCenterInfo(daGrid,xstart,ystart,zstart,xend,yend,zend)
>


> do k=zstart,zend-1
> do j=ystart,yend-1
> do i=xstart,xend-1
> [..]
>call
> MatSetValues(A,1,row,sumpos,pos(0:iter-1),vals(0:iter-1),INSERT_VALUES,ierr)
> [..]