I would time the various ways. Unless someone understands the theory of DPLL well enough to know what will and won't be fast, I think timing is the only way we can know which tricks to use when.
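A minimal sketch of the kind of timing comparison suggested here, using `timeit` against SymPy's `satisfiable` (the choice of formula and the `algorithm` values `"dpll"`/`"dpll2"` are just illustrative assumptions, not a definitive benchmark):

```python
import timeit

from sympy.abc import a, b, c
from sympy.logic.boolalg import And, Not, Or
from sympy.logic.inference import satisfiable

# A small satisfiable CNF formula to time against.
expr = And(Or(a, b), Or(Not(a), c), Or(b, Not(c)))

# Time each available DPLL-based backend on the same input.
for alg in ("dpll", "dpll2"):
    t = timeit.timeit(lambda: satisfiable(expr, algorithm=alg), number=100)
    print(f"{alg}: {t:.4f}s for 100 runs")
```

A real benchmarking setup would of course sweep over formula sizes rather than a single input, since a trick that wins on tiny formulas may lose on large ones.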
In general, though, if something is only faster for very small inputs, it's not really worth doing, because small inputs are already not slow; it only adds unnecessary complexity.

By the way, a good GSoC idea for someone would be to set up a proper benchmarking system for us. We are good at avoiding behavioral regressions with testing and Travis, but performance regressions are harder to guard against, because we don't check performance very often.

Aaron Meurer

On Thu, Jan 30, 2014 at 7:29 AM, Sachin Joglekar <[email protected]> wrote:
> This may seem like an ad-hoc way of doing things, but can we think of using
> DNF for satisfiability of formulae with a small number of atoms? Since DNF
> conversion generally causes an exponential explosion in the size of an
> expression, its use to check satisfiability is usually frowned upon by the
> logic community. However, checking the satisfiability of a DNF is trivial.
> Therefore, since _we_ can check the number of atoms in a given expression,
> can we just use the DNF method when the number of atoms falls below a
> certain threshold (say 5 or 6)?
> @asmeurer, what do you think?
>
>
> On Wed, Jan 29, 2014 at 4:41 PM, Soumya Biswas <[email protected]> wrote:
>>
>> It seems I had somehow missed this conversation (on the issues page). It
>> is definitely true that this transformation adds many auxiliary variables
>> (as many as the number of non-atomic clauses). However, what I am working
>> on is optimizing this encoding to the point that the encoding plus
>> satisfiability time is reduced overall. Firstly (as observed on the issues
>> thread), handling duplicate subexpressions optimizes the process to some
>> extent. Additionally, I worked out a way (I don't know if it is an obvious
>> generalization or something new) such that And(a, b, c, ...) can be
>> treated as a single operator (though the number of clauses this encoding
>> produces will still grow linearly with the number of arguments).
>> This is what makes the method much faster for cases where the average
>> number of arguments to And/Or is around 3 (or more). I find the point
>> made on the issues page quite interesting, that many of the encodings are
>> quite redundant; I will look into optimization from that angle as well.
>> There is yet another polarity-based optimization that I have yet to look
>> into. I think we will have to treat encoding plus solving as the unit of
>> comparison, i.e. whether the entire process is faster or slower.
>>
>> --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "sympy" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/sympy/wknFPGmTw0w/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> [email protected].
>> To post to this group, send email to [email protected].
>> Visit this group at http://groups.google.com/group/sympy.
>> For more options, visit https://groups.google.com/groups/opt_out.
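Sachin's observation that checking satisfiability of a DNF is trivial can be sketched as follows. A DNF is satisfiable iff at least one disjunct contains no literal together with its negation; in SymPy, `And` already collapses such complementary pairs to `false`, so the per-disjunct check below is largely defensive. The helper name `dnf_satisfiable` is hypothetical, and the approach is only sensible for a small number of atoms, exactly as proposed:

```python
from sympy import false
from sympy.abc import a, b
from sympy.logic.boolalg import And, Not, Or, to_dnf

def dnf_satisfiable(expr):
    # Convert to DNF; this can blow up exponentially, so only
    # do it when the number of atoms is below a small threshold.
    dnf = to_dnf(expr, simplify=False)
    if dnf == false:
        return False
    disjuncts = dnf.args if isinstance(dnf, Or) else (dnf,)
    # A disjunct (a conjunction of literals) is satisfiable iff it
    # holds no literal alongside its negation.  SymPy's And already
    # reduces such pairs to false, so this loop is belt-and-braces.
    for term in disjuncts:
        lits = set(term.args) if isinstance(term, And) else {term}
        if not any(Not(l) in lits for l in lits):
            return True
    return False
```

For example, `dnf_satisfiable(And(Or(a, b), Not(a), Not(b)))` correctly reports the formula unsatisfiable, since every disjunct of its DNF collapses.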
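Soumya's point about treating And(a, b, c, ...) as a single operator with a clause count linear in the arity matches the standard n-ary Tseitin-style encoding: a fresh auxiliary variable `aux` is defined by the clauses (~aux | x_i) for each argument, plus one clause (aux | ~x_1 | ... | ~x_n), giving n + 1 clauses. This is a sketch under that assumption, not Soumya's actual code; `encode_and` is a hypothetical helper name:

```python
from sympy import Dummy
from sympy.abc import a, b, c
from sympy.logic.boolalg import And, Not, Or

def encode_and(args):
    """Return (aux, clauses) with the clauses asserting aux <-> And(*args).

    n arguments yield n + 1 clauses, so the encoding grows linearly
    with the arity instead of encoding the n-ary And pairwise.
    """
    aux = Dummy("aux")  # Dummy guarantees a fresh variable
    clauses = [Or(Not(aux), x) for x in args]         # aux -> x_i, for each i
    clauses.append(Or(aux, *[Not(x) for x in args]))  # (x_1 & ... & x_n) -> aux
    return aux, clauses

aux, clauses = encode_and([a, b, c])
print(clauses)
```

The conjunction of the returned clauses is logically equivalent to Equivalent(aux, And(a, b, c)), which is what lets the encoding plus a CNF solver replace direct expansion.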
