> The code says that it supports simple clause learning, but it seems to
> always be disabled by default. Do you know why that is?
There are many forms of clause learning. The current implementation of
clause learning adds nothing to the solving power, as the backtracking is
done in such a way that the decisions are undone (as opposed to the
variable settings). Basically, the currently implemented clause learning
scheme (which is not 1UIP) adds nothing to the process (and so was
disabled).

> OK. We can support many backends, and do benchmarks and pick the best
> one that is installed. pycosat is nice because, as I mentioned, it is
> used by conda, so it will always be installed in the Anaconda Python
> environment for example.

That's a very strong motivation I'd say -- not having to pre-package
things / worry about licensing would be nice.

> > Bottom line is, if you can find a polynomial way to solve your problem
> > (even if the solution you find seems shitty in practice), then chances
> > are there is some specially designed knowledge reasoning technique
> > that would blow the SAT solving approach out of the water (even if you
> > used the best SAT solver out there). This falls along the lines of
> > what I was getting at during my final weeks of the SoC project:
> >
> > - http://haz-tech.blogspot.com.au/2010/08/soc-wrap-up.html
>
> OK, thanks for pointing that out. So I think one thing we can do right
> away is to try to use some heuristics to avoid the full SAT solver when
> we don't need it, especially if it ends up being slow. For instance, we
> can use the known_facts_dict. We can also try solving against free
> propositions only first before constructing non-free ones. And if all Q
> keys in question are over the same expression, we can use only the free
> version.
>
> I'm not sure if the general problem will be polynomial. There are some
> rather complicated facts, which I'm a little worried may grow to be
> rather large.
> For instance, if we have a product, like x*y*z, and we know that each
> term in the product is either positive or negative, then the sign of
> the product is the parity of the number of negative terms: if there are
> an even number of negative terms the product is positive, and if there
> are an odd number of negative terms the sign is negative. The only way
> I know to represent this last part would be to enumerate the cases,
> like Implies(Or(Q.negative(x) & Q.positive(y) & Q.positive(z),
> Q.positive(x) & Q.negative(y) & Q.positive(z), Q.positive(x) &
> Q.positive(y) & Q.negative(z), ...), Q.negative(x*y*z)). I don't know
> if this is polynomial or not, but I wouldn't be surprised if it isn't.
>
> <snip>

So it seems as though computing the relevant facts would be the
computationally expensive case -- the SAT solver doesn't save you from
generating exponentially many facts, does it? A polynomial operation over
an exponentially large input is a crummy complexity, but still better
than using an NP-Complete hammer to solve the exponentially large input.

Cheers,
Christian

--
You received this message because you are subscribed to the Google Groups
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/sympy.
For more options, visit https://groups.google.com/groups/opt_out.
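To make the "avoid the full SAT solver when we don't need it" idea above concrete, here is a minimal sketch. The names are hypothetical: `known_facts` stands in for the known_facts_dict mentioned above, and `brute_force_sat` is a toy stand-in for a real backend such as pycosat (which consumes the same DIMACS-style list-of-clauses input):

```python
from itertools import product

def brute_force_sat(cnf, nvars):
    """Toy stand-in for a SAT backend such as pycosat: cnf is a list of
    clauses, each clause a list of nonzero ints (DIMACS-style literals).
    Returns a satisfying assignment as a list of signed ints, or "UNSAT".
    """
    for bits in product([False, True], repeat=nvars):
        # literal l is true iff bits[abs(l) - 1] == (l > 0)
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in cnf):
            return [i + 1 if b else -(i + 1) for i, b in enumerate(bits)]
    return "UNSAT"

def ask(prop, known_facts, cnf, nvars):
    """Hypothetical fast path: consult a lookup table of known facts
    before falling back to the full SAT solver."""
    if prop in known_facts:
        return known_facts[prop]
    # Refutation check: prop (a positive literal) is entailed by cnf
    # iff cnf plus the negation of prop is unsatisfiable.
    return brute_force_sat(cnf + [[-prop]], nvars) == "UNSAT"
```

Entailment is checked by refutation: `prop` follows from the clauses iff adding its negation makes them unsatisfiable, which is the same way an external backend like pycosat would be driven.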

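On the product-sign example quoted above: the "parity of the number of negative factors" rule can be enumerated mechanically, which also shows why the explicit encoding blows up. This is an illustrative sketch (the helper name is made up, not SymPy API):

```python
from itertools import product

def product_sign_cases(n):
    """Enumerate every assignment of signs (+1/-1) to n factors and
    collect the ones that make the product negative. The rule
    'odd number of negative factors => product negative' expands into
    2**(n-1) cases, so an explicit Or(...) encoding of it grows
    exponentially in the number of factors."""
    negative_cases = []
    for signs in product([+1, -1], repeat=n):
        if signs.count(-1) % 2 == 1:  # odd parity -> product negative
            negative_cases.append(signs)
    return negative_cases
```

For n factors there are 2**(n-1) sign patterns with an odd number of negatives (four for x*y*z), so the enumerated Implies(Or(...), Q.negative(x*y*z)) fact above does grow exponentially in n, consistent with the point that generating the relevant facts, rather than the SAT call itself, dominates the cost.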