Thanks Mark. It is true that partitioning is NP-complete, but it's a bit strange to get a result worse than the initial numbering. I mean, you could at the very least always compare against the initial numbering and, if the new partition is worse, just use the initial one! I think that's what I'm going to add to my own code, then.
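Something along the lines of this minimal serial sketch is what I have in mind (toy graph, made-up file and helper names, not the actual code): count the ghost nodes each part would need under the old and the new partition vectors, and fall back to the initial numbering whenever ParMETIS's result is worse.

/* keep_better_part.c -- hypothetical sketch (toy graph, made-up names; not the
 * actual code from this thread): after ParMETIS returns a new partition vector,
 * keep it only if its worst-rank ghost-node count beats the initial numbering.
 * The graph is in the usual CSR form (xadj/adjncy), one part[] entry per vertex. */
#include <stdio.h>
#include <stdlib.h>

/* Ghost nodes of part p are vertices owned by other parts that are adjacent to
 * at least one vertex of p, each counted once.  Return the max over all parts. */
static int max_ghosts(int n, const int *xadj, const int *adjncy,
                      const int *part, int nparts)
{
  int *mark = malloc((size_t)n * sizeof(int));
  int maxg = 0;
  for (int p = 0; p < nparts; p++) {
    int ghosts = 0;
    for (int v = 0; v < n; v++) mark[v] = 0;
    for (int v = 0; v < n; v++) {
      if (part[v] != p) continue;
      for (int j = xadj[v]; j < xadj[v + 1]; j++) {
        int nb = adjncy[j];
        if (part[nb] != p && !mark[nb]) { mark[nb] = 1; ghosts++; }
      }
    }
    if (ghosts > maxg) maxg = ghosts;
  }
  free(mark);
  return maxg;
}

int main(void)
{
  /* 6-vertex path 0-1-2-3-4-5 in CSR form */
  int xadj[]   = {0, 1, 3, 5, 7, 9, 10};
  int adjncy[] = {1, 0, 2, 1, 3, 2, 4, 3, 5, 4};
  int initial[]  = {0, 0, 0, 1, 1, 1};  /* contiguous halves: 1 ghost per part */
  int returned[] = {0, 1, 0, 1, 0, 1};  /* pretend ParMETIS handed this back   */
  const int n = 6, nparts = 2;

  int g_old = max_ghosts(n, xadj, adjncy, initial,  nparts);
  int g_new = max_ghosts(n, xadj, adjncy, returned, nparts);
  const int *best = (g_new < g_old) ? returned : initial;  /* the fallback rule */

  printf("max ghosts: initial %d, parmetis %d -> keeping the %s partition\n",
         g_old, g_new, (best == returned) ? "new" : "initial");
  return 0;
}

In the real code the same comparison would of course have to work on the distributed graph, but the fallback rule itself stays this simple.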
Anyway, as long as it's not a bug in my code, I'm happy. Thanks for the help.

On Thu, Feb 16, 2012 at 4:03 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:

>
> On Feb 16, 2012, at 5:06 PM, Mohammad Mirzadeh wrote:
>
> Thanks everyone for participating in the discussion. I really appreciate it. I guess I'm going to describe the problem in more detail here. Basically, I use parmetis to partition my quadtree grid (we have previously talked about why I'm not using p4est right now). Please see the attached figures for a sample AMR quadtree grid.
>
> For most cases, the partition quality that I get is reasonable, like the one for the smaller grid (maximum level of AMR = 4), where I have color-coded the grid points according to their proc_id (0, 1, 2 and 3), both before and after the partitioning. In this case, the program tells me that:
>
> [0] # of edge cuts: 35
> [1] # of edge cuts: 35
> [2] # of edge cuts: 35
> [3] # of edge cuts: 35
> [0] Before: # of ghost Nodes: 35
> [1] Before: # of ghost Nodes: 29
> [2] Before: # of ghost Nodes: 35
> [3] Before: # of ghost Nodes: 23
> [0] After: # of ghost Nodes: 17
> [1] After: # of ghost Nodes: 20
> [2] After: # of ghost Nodes: 17
> [3] After: # of ghost Nodes: 16
>
> Here the partitioning is quite good and the number of ghost points required has dropped after the partitioning is applied. However, if I run the code for a larger grid (maximum AMR level = 8), the partitioning that I get is actually worse than the initial numbering I started with (at least for procs 0, 1 and 2):
>
> [0] # of edge cuts: 67
> [1] # of edge cuts: 67
> [2] # of edge cuts: 67
> [3] # of edge cuts: 67
> [0] Before: # of ghost Nodes: 58
> [1] Before: # of ghost Nodes: 67
> [2] Before: # of ghost Nodes: 62
> [3] Before: # of ghost Nodes: 44
> [0] After: # of ghost Nodes: 103
> [1] After: # of ghost Nodes: 115
> [2] After: # of ghost Nodes: 81
> [3] After: # of ghost Nodes: 28
>
> As you can see, for procs 0, 1 and 2 the number of required ghost points has nearly doubled, which means more communication. To me, this should not happen (maybe I'm wrong here?).
>
>
> Partitioning is an NP-hard problem, i.e., you do not get the optimal solution in reasonable time. If you give ParMetis the optimal initial partitioning, it will ignore it, just run its algorithm, and in general give you a sub-optimal solution. It does not check whether the initial partition was better than what it produced. So you _can_ get a worse partitioning out of ParMetis than what you gave it.
>
> Mark
>
> That's why I asked whether minimizing edge cuts does not translate into having fewer ghost points.
>
> Does minimizing latency favor creating 'clustered chunks' of points for each processor, and if so, could that create a partitioning with more ghost points?
>
> One final point: I'm not sure how parmetis calculates the edge cuts, but if you simply count the number of cuts in the smaller graph, you don't get the reported numbers. Why is that? (For instance, proc 3 cuts 20 edges, including the extra 'diagonal' ones at the coarse-fine interfaces, but parmetis reports 35 -- I can explain this in more detail if needed.)
>
> Thanks,
> Mohammad
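An aside on the edge-cut question just above: if I read the ParMETIS manual correctly, the edgecut output of ParMETIS_V3_PartKway is the total number of cut edges over the whole graph, which would explain why every rank prints the same value and why it is larger than a count done by hand for one process. It also shows why fewer edge cuts need not mean fewer ghost points: a single ghost vertex can sit behind several cut edges. A toy illustration in C (my own made-up 2x3 grid, not the quadtree from this thread):

/* edgecut_vs_ghosts.c -- toy illustration of why the global edge cut, the
 * per-part cut edges, and the per-part ghost nodes are three different
 * numbers for one and the same partition. */
#include <stdio.h>

int main(void)
{
  /* 2x3 grid graph (4-neighbor connectivity), vertices numbered
   *   0 1 2
   *   3 4 5
   * stored in CSR form. */
  int xadj[]   = {0, 2, 5, 7, 9, 12, 14};
  int adjncy[] = {1, 3, 0, 2, 4, 1, 5, 0, 4, 1, 3, 5, 2, 4};
  int part[]   = {0, 0, 0, 0, 0, 1};   /* vertex 5 alone on part 1 */
  const int n = 6, nparts = 2;

  /* Global edge cut: every cut edge counted once, the same number everywhere. */
  int global_cut = 0;
  for (int v = 0; v < n; v++)
    for (int j = xadj[v]; j < xadj[v + 1]; j++)
      if (adjncy[j] > v && part[adjncy[j]] != part[v]) global_cut++;
  printf("global edge cut: %d\n", global_cut);

  for (int p = 0; p < nparts; p++) {
    int cut_here = 0, ghosts = 0, seen[6] = {0};
    for (int v = 0; v < n; v++) {
      if (part[v] != p) continue;
      for (int j = xadj[v]; j < xadj[v + 1]; j++) {
        int nb = adjncy[j];
        if (part[nb] == p) continue;
        cut_here++;                                /* every cut edge touching p */
        if (!seen[nb]) { seen[nb] = 1; ghosts++; } /* each ghost counted once   */
      }
    }
    printf("[%d] incident cut edges: %d   ghost nodes: %d\n", p, cut_here, ghosts);
  }
  return 0;
}

Here the global cut is 2 and part 0 touches both cut edges, yet it needs only one ghost node, because both edges end at the same off-part vertex.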
> On Thu, Feb 16, 2012 at 6:21 AM, Mark F. Adams <mark.adams at columbia.edu> wrote:
>
>> On Feb 16, 2012, at 8:46 AM, Matthew Knepley wrote:
>>
>> On Thu, Feb 16, 2012 at 7:43 AM, Gerard Gorman <g.gorman at imperial.ac.uk> wrote:
>>
>>> Matthew Knepley emailed the following on 16/02/12 13:29:
>>> > On Thu, Feb 16, 2012 at 3:38 AM, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:
>>> >
>>> > Hi guys,
>>> >
>>> > I'm wondering if there is any implementation for ParMETIS_V3_PartGeomKway()? All I can find is the implementation for ParMETIS_V3_PartKway, and I'm wondering if including the vertex positions could help me get a better partitioning?
>>> >
>>> > As far as communication goes, it will not help.
>>> >
>>> > Also, I have a general question. Is minimizing the number of edge cuts essentially the same as minimizing communication? I understand that they are related, but communication is proportional to the number of ghost points, which is not exactly equal to (it is actually less than) the number of edge cuts. So is it possible that this could actually result in a larger number of ghost points, at least for some of the processors?
>>> >
>>> > It of course depends on your problem and the graph you draw, but you can always draw a graph where the edge cut is exactly your communication.
>>>
>>> It might also be interesting to look at Zoltan's hypergraph partitioning, which can better balance communications -
>>> http://www.cs.sandia.gov/zoltan/dev_html/dev_phg.html
>>
>> I would caution you that one thing people are rarely careful about is quantifying the effect of latency vs. communication volume. I would bet you $10 that the lion's share of the slowdown is due to load imbalance/latency rather than communication volume (since comm links are so big).
>>
>> Just to add: edge cuts are a good proxy for communication volume on PDE graphs, IMO, but they are not the same, and I applaud Zoltan for optimizing this directly. Both edge cuts and #ghosts are decent metrics for indirectly minimizing the number of partition neighbors (latency). As Matt says, this is probably what you want to minimize, assuming good load balance.
>>
>> Furthermore, you don't really want to minimize the total of any of these. What you want to minimize is the maximum cuts/neighbors/load of any one partition -- that is load balance (max/optimal).
>>
>> Mark
>>
>> Matt
>>
>>> Cheers
>>> Gerard
>>
>> --
>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>> -- Norbert Wiener
>
> <new_4_1.jpg> <old_4_1.jpg> <old_8_1.jpg> <new_8_1.jpg>
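To make Mark's "max/optimal" point concrete, here is a small serial sketch (a 3x3 grid of my own, same CSR layout as the sketches above, made-up names) that reports, for one partition, the per-part load, ghost nodes (a communication-volume proxy), and number of neighbor parts (a latency proxy), plus the max/optimal load ratio. The quantity to watch in each case is the maximum over parts, not the sum.

/* partition_metrics.c -- toy sketch of the metrics discussed in this thread:
 * per-part load, ghost nodes, neighbor parts, and the max/optimal load ratio. */
#include <stdio.h>

#define N      9
#define NPARTS 3

int main(void)
{
  /* 3x3 grid graph (4-neighbor), vertices numbered row by row:
   *   0 1 2
   *   3 4 5
   *   6 7 8  */
  int xadj[]   = {0, 2, 5, 7, 10, 14, 17, 19, 22, 24};
  int adjncy[] = {1, 3, 0, 2, 4, 1, 5, 0, 4, 6, 1, 3, 5, 7,
                  2, 4, 8, 3, 7, 4, 6, 8, 5, 7};
  int part[N]  = {0, 0, 0, 1, 1, 1, 2, 2, 2};   /* one part per grid row */
  int load[NPARTS] = {0}, ghosts[NPARTS] = {0}, nbrs[NPARTS] = {0};
  int max_load = 0;

  for (int p = 0; p < NPARTS; p++) {
    int seen_vtx[N] = {0}, seen_part[NPARTS] = {0};
    for (int v = 0; v < N; v++) {
      if (part[v] != p) continue;
      load[p]++;
      for (int j = xadj[v]; j < xadj[v + 1]; j++) {
        int nb = adjncy[j], q = part[nb];
        if (q == p) continue;
        if (!seen_vtx[nb])  { seen_vtx[nb]  = 1; ghosts[p]++; } /* volume proxy  */
        if (!seen_part[q])  { seen_part[q]  = 1; nbrs[p]++;   } /* latency proxy */
      }
    }
    if (load[p] > max_load) max_load = load[p];
  }

  for (int p = 0; p < NPARTS; p++)
    printf("[%d] load %d  ghost nodes %d  neighbor parts %d\n",
           p, load[p], ghosts[p], nbrs[p]);

  /* Mark's "max/optimal": 1.0 means a perfectly balanced load. */
  printf("load imbalance (max/optimal): %.2f\n", max_load / ((double)N / NPARTS));
  return 0;
}

For this row-wise split the load is perfectly balanced, yet the middle part ends up with twice the ghosts and neighbor parts of the outer ones; that per-part maximum is exactly the kind of number the discussion above says to keep an eye on.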
