The AIJ matrix format internally supports the VBR-style blocking described below [called inodes in PETSc], so I'm not sure what problem you are having.

Satish

On Tue, 14 May 2013, Longxiang Chen wrote:

> VBR is like in this link; it uses 6 arrays to represent a matrix:
> http://docs.oracle.com/cd/E19061-01/hpc.cluster5/817-0086-10/prog-sparse-support.html
>
> Each row is a vertex in the graph, and I use parmetis to partition the
> graph to minimize the number of cuts between different processors (to
> reduce communication when calculating matrix-vector products).
> The matrix is calculated from the Jacobian, and A and b are constructed
> from the result of the Jacobian (in VBR).
>
> Best regards,
> Longxiang Chen
>
> Do something every day that gets you closer to being done.
> --------------------------------------------------------------
> 465 Winston Chung Hall
> Computer Science Engineering
> University of California, Riverside
>
>
> On Tue, May 14, 2013 at 2:51 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
> > What kind of VBR matrix? What are you partitioning using parmetis? A mesh?
> > The blocks of the matrix? How do you create the entries in the matrix?
> > On May 14, 2013 4:36 PM, "Longxiang Chen" <suifengls at gmail.com> wrote:
> >
> >> To whom it may concern,
> >>
> >> I use parmetis to partition a mesh for a sparse matrix.
> >> Then I distribute the data to the appropriate processors according to
> >> the result of the partition.
> >>
> >> The sparse matrix is stored in Variable Block Row (VBR) format.
> >> After the distribution, I want to call the PETSc KSP solver to solve Ax = b.
> >> I tried to convert VBR to AIJ or CSR format, but the data would be
> >> re-distributed.
> >>
> >> The ideal method is to keep the distribution result from parmetis.
> >> For example, after parmetis, processor 0 has rows 0, 1, 4, and processor 1
> >> has rows 2, 3, 5. I wish PETSc would not change this distribution and
> >> would solve Ax = b with it.
> >>
> >> Are there any approaches to calling the KSP solver on a VBR matrix from PETSc?
> >> Or any suggestions for solving Ax = b?
> >>
> >> Thanks in advance.
> >>
> >> Regards,
> >> Longxiang Chen
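
The thread does not include code, so what follows is a minimal sketch (not the
poster's actual program) of one way to keep the parmetis-assigned rows on each
rank while solving with KSP. The key is passing the LOCAL row count to
MatSetSizes: PETSc then keeps exactly that many rows on each process and never
redistributes them. Note that PETSc parallel matrices own a contiguous range of
global rows per process, so a partition like {0, 1, 4} on rank 0 first has to be
renumbered (e.g. with AOCreateBasic) so each rank's rows become contiguous; the
data itself never moves. The nlocal value and the 1D Laplacian assembly loop are
placeholders, error checking is omitted for brevity, and the function names
assume a recent PETSc release (e.g. the two-argument KSPSetOperators needs
PETSc >= 3.5; the 3.4-era API current at the time of this thread differs
slightly).

    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      Mat         A;
      Vec         x, b;
      KSP         ksp;
      PetscMPIInt size;
      PetscInt    nlocal = 4;            /* rows this rank owns (from the partition) */
      PetscInt    N, rstart, rend, i, j;
      PetscScalar v;

      PetscInitialize(&argc, &argv, NULL, NULL);
      MPI_Comm_size(PETSC_COMM_WORLD, &size);
      N = nlocal * size;                 /* global size of this toy example */

      /* Giving the LOCAL size here is what preserves the partition: PETSc keeps
         rows [rstart,rend) on this rank and does not redistribute them. */
      MatCreate(PETSC_COMM_WORLD, &A);
      MatSetSizes(A, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);
      MatSetFromOptions(A);              /* MPIAIJ by default in parallel */
      MatSetUp(A);                       /* use MatMPIAIJSetPreallocation in real code */
      MatGetOwnershipRange(A, &rstart, &rend);

      /* In the real code this loop would walk the local VBR block rows and insert
         each dense block with one MatSetValues call, using global row/column
         indices in the renumbered (contiguous-per-rank) ordering.  Here we just
         assemble a 1D Laplacian so the sketch runs stand-alone. */
      for (i = rstart; i < rend; i++) {
        v = 2.0;  MatSetValues(A, 1, &i, 1, &i, &v, INSERT_VALUES);
        if (i > 0)     { j = i - 1; v = -1.0; MatSetValues(A, 1, &i, 1, &j, &v, INSERT_VALUES); }
        if (i < N - 1) { j = i + 1; v = -1.0; MatSetValues(A, 1, &i, 1, &j, &v, INSERT_VALUES); }
      }
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      MatCreateVecs(A, &x, &b);          /* vectors inherit the matrix row layout */
      VecSet(b, 1.0);                    /* fill with the locally owned RHS entries */

      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A);        /* two-argument form (PETSc >= 3.5) */
      KSPSetFromOptions(ksp);            /* choose solver/preconditioner at run time */
      KSPSolve(ksp, b, x);

      KSPDestroy(&ksp);
      VecDestroy(&x);  VecDestroy(&b);
      MatDestroy(&A);
      PetscFinalize();
      return 0;
    }

A real conversion would replace the Laplacian loop with a walk over the local
VBR block rows, inserting each dense block with a single MatSetValues call (or
MatSetValuesBlocked on a MATMPIBAIJ matrix if all blocks have the same size);
with MPIAIJ, the inode code Satish mentions detects rows with identical nonzero
structure automatically.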
