Thanks for pointing out the function MatXAIJSetPreallocation(). I followed your suggestions and tried it on my problem: preallocate the matrix with MatXAIJSetPreallocation(), set its values with MatSetValuesBlockedLocal(), and use this matrix with aijcusparse.
It works! Thank you.

Xiangdong

On Tue, Oct 29, 2019 at 8:10 PM Mills, Richard Tran <[email protected]> wrote:

> We will let you know when this is ready, Xiangdong.
>
> Let me address a part of your original question that I don't think anyone
> else noticed:
>
> In my current code, the Jacobian matrix is preallocated and assembled in
> BAIJ format. Do I have to rewrite this part of the code to preallocate and
> assemble the matrix as AIJ in order to use aijcusparse?
>
> If you are doing your preallocation via MatXAIJSetPreallocation() and
> setting values via MatSetValuesBlocked() (or its variants), you can change
> the type to AIJ (or AIJCUSPARSE) instead of BAIJ and things should just
> work. (If not, let us know, as this may indicate a bug in PETSc.) If you
> are calling MatSetFromOptions(), you should be able to do this on the
> command line (or otherwise through the PETSc options database) -- no code
> rewrites needed.
>
> Best regards,
> Richard
>
> On 10/28/19 7:40 AM, Xiangdong wrote:
>
> Thanks for your information. Glad to hear that BAIJ GPU support is on the
> way. Waiting a few weeks is not an issue at all. Once you finish the BAIJ
> GPU interface, could you please make an announcement here or in the
> changelog?
>
> Thank you.
>
> Xiangdong
>
> On Sat, Oct 26, 2019 at 12:11 AM Mills, Richard Tran <[email protected]>
> wrote:
>
>> Xiangdong,
>>
>> cuSPARSE does support block compressed sparse row (BAIJ) format, but we
>> don't currently support that cuSPARSE functionality in PETSc. It should
>> be easy to add, but we are currently refactoring the way we interface
>> with third-party GPU libraries such as cuSPARSE, and it would probably
>> make more sense to add this support after that refactor is done. Do you
>> need this right away, or could it wait maybe a few weeks until this is
>> completed?
>>
>> Best regards,
>> Richard
>>
>> On Fri, Oct 25, 2019 at 1:50 PM Smith, Barry F. via petsc-users <
>> [email protected]> wrote:
>>
>>> You would need to investigate whether the Nvidia cuSPARSE package
>>> supports such a format. If it does, then it would be reasonably
>>> straightforward for you to hook up the required interface from PETSc.
>>> If it does not, then it is a massive job to provide such code, and you
>>> should see whether any open-source packages provide such CUDA support;
>>> then you could hook PETSc up to use that.
>>>
>>> Barry
>>>
>>> > On Oct 25, 2019, at 3:43 PM, Xiangdong via petsc-users <
>>> [email protected]> wrote:
>>> >
>>> > Can anyone comment on PETSc's GPU version of block CSR, say
>>> BAIJCUSPARSE? Does it make sense to have such a format on GPU? Is it
>>> under development?
>>> >
>>> > Thank you.
>>> >
>>> > Xiangdong
>>> >
>>> > On Wed, Oct 23, 2019 at 11:36 AM Xiangdong <[email protected]> wrote:
>>> > Hello everyone,
>>> >
>>> > I am wondering whether there is a format BAIJCUSPARSE for block CSR
>>> on GPU.
>>> >
>>> > In my current code, the Jacobian matrix is preallocated and assembled
>>> in BAIJ format. Do I have to rewrite this part of the code to
>>> preallocate and assemble the matrix as AIJ in order to use aijcusparse?
>>> >
>>> > Thank you.
>>> >
>>> > Xiangdong
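P.S. For anyone finding this thread later, the pattern Richard describes can be sketched roughly as below. This is a minimal, untested sketch (it needs a PETSc build and MPI launcher to run); the sizes, block size, nonzero estimates, and inserted values are placeholders for illustration, not taken from the thread:

```c
/* Sketch: preallocate with MatXAIJSetPreallocation() and assemble with
 * MatSetValuesBlocked(), so the matrix type (BAIJ, AIJ, AIJCUSPARSE)
 * can be changed without touching the assembly code, e.g. via
 *   -mat_type aijcusparse
 * on the command line when MatSetFromOptions() is called. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       bs = 3, nb = 10;      /* hypothetical block size / number of block rows */
  PetscInt       dnnz[10], onnz[10], i;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  /* Per-block-row nonzero-block estimates (placeholders). */
  for (i = 0; i < nb; i++) { dnnz[i] = 3; onnz[i] = 1; }

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, nb * bs, nb * bs);CHKERRQ(ierr);
  ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);   /* or MATBAIJ; overridable below */
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);    /* honors -mat_type aijcusparse */
  /* Works for AIJ, BAIJ, and SBAIJ types alike; last two args are for SBAIJ. */
  ierr = MatXAIJSetPreallocation(A, bs, dnnz, onnz, NULL, NULL);CHKERRQ(ierr);

  /* Insert one bs-by-bs block (an identity block, purely illustrative). */
  {
    PetscScalar vals[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};
    PetscInt    row = 0, col = 0;
    ierr = MatSetValuesBlocked(A, 1, &row, 1, &col, vals, INSERT_VALUES);CHKERRQ(ierr);
  }

  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

Because both the preallocation call and the blocked insertion are type-agnostic, switching the matrix type on the options database is the only change needed.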
