On Mon, Aug 11, 2025 at 12:32 PM Yongzhong Li <yongzhong...@mail.utoronto.ca> wrote:
> Dear PETSc developers,
>
> Hi, I am a user of PETSc. I have some questions about how to configure
> PETSc with AOCL BLAS and LAPACK.
>
> Previously, we linked PETSc with Intel MKL BLAS. This solution gives us
> much better multithreading capability for sparse matrix-vector products
> compared with configuring PETSc with OpenBLAS. Now that our compute nodes
> have been upgraded with AMD CPUs, we are considering switching from Intel
> MKL to AMD AOCL.
>
> My questions are:
>
> 1. If we configure PETSc at compile time with --with-blaslapack-dir=$AOCLROOT,
>    will we be able to use AOCL BLAS as the backend of the PETSc MatMult() API?

Do you mean use the AMD sparse matvec? We have a HIP implementation
(https://petsc.org/main/manualpages/Mat/MATAIJHIPSPARSE/), but nothing for
AOCL comparable to the AIJMKL class. If you think we need it, we would
certainly help implement it.

> 2. What if AOCL BLAS and AOCL LAPACK are installed in two different
>    directories, not under AOCLROOT?

You would use --with-blaslapack-lib=[liblist].

> 3. PETSc has the MATAIJMKL type for sparse matrices stored in Intel MKL
>    format. Does PETSc also have another type for AMD AOCL?

No, but it would be straightforward to add.

  Thanks,

     Matt

> Thanks!
>
> Yongzhong

-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
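For anyone following the thread, the two configure approaches discussed above might look something like the sketch below. This is a hedged example, not a verified recipe: the install paths are placeholders, and the AOCL library file names (`libblis` for AOCL BLAS, `libflame` for AOCL LAPACK) are typical of AOCL releases but should be checked against your installation.

```shell
# Sketch of configuring PETSc against AOCL BLAS/LAPACK.
# $AOCLROOT and all paths below are illustrative placeholders.

# Case 1: AOCL BLAS and LAPACK live under a single install tree.
./configure --with-blaslapack-dir=$AOCLROOT

# Case 2: BLAS and LAPACK are installed in separate directories;
# pass the libraries explicitly as a comma-separated list
# (LAPACK before BLAS, since LAPACK depends on BLAS symbols).
./configure \
  --with-blaslapack-lib=[/path/to/aocl-libflame/lib/libflame.a,/path/to/aocl-blis/lib/libblis.a]
```

Note that either way this only selects the dense BLAS/LAPACK backend; as noted above, there is currently no AOCL analogue of the MATAIJMKL sparse matrix class.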