On Sat, Apr 29, 2006 at 12:38:22AM +0200, Adrian von Bidder wrote:
> On Thursday 27 April 2006 21:53, Ralf Treinen wrote:
> > The constraint solving algorithm is complete, that is, it finds a
> > solution whenever one exists, even for multiple disjunctive
> > dependencies and deep package conflicts. This problem is
> > computationally intractable in theory (that is, NP-complete), but
> > can in practice be solved very efficiently.
>
> Can you elaborate? I can see 3 possibilities:
>
> * P = NP and upstream found this and sits on the result
> * The (NP-complete) problem the tool claims to solve is not the exact
>   problem the tool solves in reality, thus the tool solves a different
>   problem.
> * The problem is NP-complete but this is not relevant yet, with only a
>   few 10000 packages. Try with a few 100000 packages and be prepared to
>   wait hours and/or invest in a few TB of main memory... ;-)
>
> I guess it's the second, or perhaps the third.
As far as I know it is not (1) (but who knows? :-) The answer is (3). You can find a formal proof of NP-completeness by reduction from 3SAT on page 49 of this document:

http://www.edos-project.org/xwiki/bin/download/Main/Deliverables/edos-wp2d1.pdf

and the reverse reduction of dependency satisfiability to the satisfiability problem of Boolean formulas on page 60 of this document:

http://www.edos-project.org/xwiki/bin/download/Main/Deliverables/edos-wp2d2.pdf

In fact, the debcheck tool I talked about (which is not to be confused with the tool of the same name written by weasel, which is currently used by the QA team) is based on a translation of the problem into satisfiability of a Boolean (propositional) formula, followed by a custom-built SAT solver.

The apparent paradox that satisfiability of Boolean propositional formulas, though an NP-complete problem, can often be solved quite efficiently is well known in the community of so-called SAT solvers (see, for instance, the end of the Wikipedia entry http://en.wikipedia.org/wiki/Boolean_satisfiability_problem). This surprising observation can be explained by the fact that hardness results in complexity theory are about *worst-case* complexity. The problem instances we encounter in practice are often much better behaved than the worst case; they are not even randomly chosen. In fact, we do not expect the behaviour of the tool to degrade when we pass from 20,000 packages to 200,000 or to 2,000,000 (except by a linear factor, of course), provided that the inter-dependencies between packages do not become more complicated.

I hope this explains it. I would like to include some relevant material from the EDOS deliverables in the documentation of my tool, but I will need permission from the EDOS project for that.

-Ralf.