Hi Jed, I am back to work on this now...
Thanks for explaining how I can do this; it makes perfect sense:

A. Assign entry values of 1 when forming the Adj matrix.

B. Convert the Adj matrix to an AIJ matrix so I can use MatMatTransposeMult (or form the AIJ directly). Can I preallocate the AIJ before calling MatConvert, to avoid slow performance? Is there any conflict between the preallocation functions and MatConvert?

C. Filter the AIJ to remove entries with values less than 3 (for 3D problems). What function shall I use to do this operation?
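Unless there is a ready-made filtering function, this is roughly what I have in mind for B and C (an untested sketch; I am assuming MatConvert accepts the MATMPIADJ matrix directly, and BuildFilteredDualGraph and nmin are just placeholder names, nmin being 2 in 2D and 3 in 3D):

#include <petscmat.h>

/* Rough sketch for steps A-C. `adj` is the unit-weighted MATMPIADJ
   adjacency; `nmin` is the minimum number of shared nodes. */
PetscErrorCode BuildFilteredDualGraph(Mat adj, PetscInt nmin, Mat *dual)
{
  Mat               aij, C;
  MPI_Comm          comm;
  PetscInt          m, rstart, rend, i, j, ncols, cnt, maxnz = 0;
  const PetscInt    *cols;
  const PetscScalar *vals;
  PetscErrorCode    ierr;

  PetscFunctionBegin;
  /* B: convert to AIJ, then C = Adj * Adj^T, so that C(i,j) is the
     number of nodes shared by cells i and j */
  ierr = MatConvert(adj, MATAIJ, MAT_INITIAL_MATRIX, &aij);CHKERRQ(ierr);
  ierr = MatMatTransposeMult(aij, aij, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);CHKERRQ(ierr);

  /* C, pass 1: count the entries that survive the filter, for a crude
     per-row-maximum preallocation of the filtered matrix */
  ierr = MatGetOwnershipRange(C, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ierr = MatGetRow(C, i, &ncols, &cols, &vals);CHKERRQ(ierr);
    for (cnt = 0, j = 0; j < ncols; j++) {
      if (cols[j] != i && PetscRealPart(vals[j]) >= (PetscReal)nmin) cnt++;
    }
    maxnz = PetscMax(maxnz, cnt);
    ierr = MatRestoreRow(C, i, &ncols, &cols, &vals);CHKERRQ(ierr);
  }
  m    = rend - rstart;
  ierr = PetscObjectGetComm((PetscObject)C, &comm);CHKERRQ(ierr);
  ierr = MatCreateAIJ(comm, m, m, PETSC_DETERMINE, PETSC_DETERMINE,
                      maxnz, NULL, maxnz, NULL, dual);CHKERRQ(ierr);

  /* C, pass 2: keep an edge only if the two cells share >= nmin nodes,
     and drop the diagonal (a cell shares all of its nodes with itself) */
  for (i = rstart; i < rend; i++) {
    ierr = MatGetRow(C, i, &ncols, &cols, &vals);CHKERRQ(ierr);
    for (j = 0; j < ncols; j++) {
      if (cols[j] != i && PetscRealPart(vals[j]) >= (PetscReal)nmin) {
        ierr = MatSetValue(*dual, i, cols[j], 1.0, INSERT_VALUES);CHKERRQ(ierr);
      }
    }
    ierr = MatRestoreRow(C, i, &ncols, &cols, &vals);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(*dual, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(*dual, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatDestroy(&C);CHKERRQ(ierr);
  ierr = MatDestroy(&aij);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}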
Thank you

Paolo

On Sun, Nov 3, 2013 at 1:34 PM, Jed Brown <[email protected]> wrote:

> Paolo Orsini <[email protected]> writes:
>
> > The computation of the dual graph of the mesh is a bit more complicated
> > than multiplying the adjacency matrix by its transpose, but not far off.
> > With this operation, even cells that share only one node become
> > connected in the dual graph. Instead, the minimum number of common
> > nodes should be greater than 1 (2 in 2D problems, 3 in 3D problems).
> > In fact, this is an input of MatMeshToCellGraph; I should have
> > understood this before.
> >
> > This can be computed by forming the transpose of the adjacency matrix
> > (Adj_T), then doing the multiplication of Adj times Adj_T line by line,
> > and discarding the nonzero entries coming from two elements that share
> > fewer nodes than the minimum imposed. I have not implemented this yet;
> > any suggestion is welcome.
>
> You can just put in weights of 1.0, call MatMatTransposeMult, and filter
> the resulting matrix. It'll take a bit more memory, but probably not
> prohibitively much.
>
> > I also found out that Scotch has a facility to compute a dual graph
> > from a mesh, but PTScotch does not.
> > Once the graph is computed, PTScotch can load the central dual graph
> > and distribute it onto several processors during the loading.
> > Am I right to say that PETSc is interfaced only with PTScotch and not
> > with Scotch?
>
> Scotch is serial, so even if we had a specialized Scotch interface, it
> would not be scalable (in memory or time).
>
> > To check that the PTScotch partitioning works (within PFLOTRAN), I am
> > computing a DualMat with ParMETIS and saving it to a file. Then I
> > recompile the code (using a PETSc compiled with PTScotch) and load the
> > DualMat from the file rather than forming a new one. I did a successful
> > test when running on one processor, but I am having trouble when trying
> > on more.
>
> Make sure you load on the same communicator with the same sizes set (so
> that the distribution matches what you expect). You'll have to be more
> specific if you want more help.
>
> > I thought the dual graph was computed only once, even during the MPI
> > process, but it seems to be recomputed more than once. Not sure why...
> > surely I am missing something?
>
> That seems like a question about your program logic. It should be easy
> to figure out if you set a breakpoint in the debugger.
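P.S. To make sure I follow the point about loading on the same communicator with the same sizes set: is something like the sketch below the right way to pin the distribution when I load the saved DualMat? (Again just a rough sketch of what I have in mind; the filename "dualmat.bin" and the local row count mlocal are placeholders.)

#include <petscmat.h>

/* Sketch of the load side, assuming DualMat was written with MatView
   on a binary viewer. */
PetscErrorCode LoadDualMat(MPI_Comm comm, PetscInt mlocal, Mat *dual)
{
  PetscViewer    viewer;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscViewerBinaryOpen(comm, "dualmat.bin", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatCreate(comm, dual);CHKERRQ(ierr);
  ierr = MatSetType(*dual, MATMPIAIJ);CHKERRQ(ierr);
  /* Fixing the local sizes before MatLoad pins the row distribution,
     so it can be made to match what the rest of the code expects. */
  ierr = MatSetSizes(*dual, mlocal, mlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatLoad(*dual, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}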