> I might go that way: I already have pure-Python code that implements it
> and that I have been using for a year or so.
Fair enough -- though you'd probably get a big speedup moving to Cython.

>> It's a little slower on large graphs, but it still is pretty quick.
>
> Can you put any numbers on the difference, or rules of thumb? Is there an
> algorithmic complexity difference? If so, what kind of dependency are we
> talking about? Is it only in the prefactors?

Well, the Hungarian algorithm has a theoretical upper bound of O(n^3),
with n being the number of nodes, which is pretty much the best you can do
if you have a dense graph and make no assumptions about capacities. If you
have a sparse graph, other algorithms can exploit that structure a bit
more efficiently (e.g. LEMON's implementation is O(nm log n), with m being
the number of edges). Still other algorithms can use capacities and include
the capacity bounds as part of their theoretical performance, e.g.
http://arxiv.org/abs/1105.1569. However, all of these bounds are generally
quite loose upper bounds, and actual practical performance varies greatly.
The Hungarian algorithm is popular mainly because it's the best one that
can be coded up efficiently in a few hundred lines of code.
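For a concrete feel for the dense case, here is a minimal sketch (it uses
scipy.optimize.linear_sum_assignment from modern SciPy, which postdates
this thread, purely as a stand-in for any dense assignment solver; the
timing loop is for eyeballing growth, not benchmarking):

# A minimal sketch, not a benchmark: solve random dense assignment
# problems of growing size and print the wall-clock time. Assumes a
# modern SciPy (>= 0.17), whose scipy.optimize.linear_sum_assignment
# is a dense assignment solver of the kind discussed above.
import time

import numpy as np
from scipy.optimize import linear_sum_assignment

for n in (100, 200, 400):
    cost = np.random.rand(n, n)               # dense n x n cost matrix
    t0 = time.time()
    rows, cols = linear_sum_assignment(cost)  # optimal row -> column matching
    dt = time.time() - t0
    print("n=%4d  total cost=%8.3f  time=%.4f s"
          % (n, cost[rows, cols].sum(), dt))

With an O(n^3) solver, each doubling of n should cost roughly a factor of
8 in time, which the loop above lets you check directly against the
theoretical bound.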
Hope that answers your questions :-).

--Hoyt

++++++++++++++++++++++++++++++++++++++++++++++++
+ Hoyt Koepke
+ University of Washington Department of Statistics
+ http://www.stat.washington.edu/~hoytak/
+ [email protected]
++++++++++++++++++++++++++++++++++++++++++