Russ Abbott wrote:
> Your messages both suggest that I don't quite understand constraints. I believe I understand constraints reasonably well. I'm not sure you understand what I'm getting at with thread bombs.

My first reaction may have looked too strong and dismissive. Let me explain my opinion a bit, along with some ideas that came out of discussions with my colleagues here.

> The thread bomb approach generates a thread that is triggered when any of the variables it is watching gets a value. This does not eliminate generate and test, but since in effect all constraints are tested all the time, it does cut off the search immediately whenever any constraint is violated.
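The thread-bomb idea (each constraint is tested as soon as the variables it watches are bound, so a violation cuts the search immediately) can be sketched sequentially in Python. This is only an illustrative analogue, not Oz dataflow threads, and all names are made up:

```python
# Sketch of "thread bomb"-style eager testing: every constraint is
# checked as soon as all of its variables have values, so a violated
# constraint cuts off the search branch immediately.
# (Sequential Python analogue of Oz dataflow threads; illustrative only.)

def solve(domains, constraints, partial=None):
    """Backtracking search; constraints maps variable tuples to tests."""
    partial = dict(partial or {})
    if len(partial) == len(domains):
        yield partial
        return
    var = next(v for v in domains if v not in partial)
    for value in domains[var]:
        partial[var] = value
        # Eager check: test every constraint whose variables are all bound.
        ok = all(test(*[partial[v] for v in vs])
                 for vs, test in constraints.items()
                 if all(v in partial for v in vs))
        if ok:
            yield from solve(domains, constraints, partial)
        del partial[var]

doms = {'X': range(1, 5), 'Y': range(1, 5), 'Z': range(1, 5)}
cons = {('X', 'Y'): lambda x, y: x < y,
        ('Y', 'Z'): lambda y, z: y < z}
for sol in solve(doms, cons):
    print(sol)
```

As soon as X=3, Y=1 violates X < Y, the whole subtree below it is abandoned without ever touching Z, which is exactly the early cut-off described above.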

> My conclusion from this paragraph is that logic programming with dataflow concurrency is more powerful than logic programming alone. Your examples show clearly that it makes programs simpler and more efficient.

> The argument for direct constraint programming is that it skips the generate step. But when one's domain is a simple set, the work done to eliminate the generate step is essentially the same as the work done to test the value once it is generated. The two are equivalent.

Constraint programming in general *does not* skip the generate step; it prunes it. Propagation removes some candidate values in advance, by local inference, and it executes in polynomial time.

That being said, an NP-hard problem still requires generation, because propagation alone is not enough; it only reduces the amount of search. But in many combinatorial problems this reduction is drastic: the raw performance gain can be several orders of magnitude, and it is what made constraint programming usable in the real world.
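To make the pruning-vs-generation point concrete, here is a minimal Python sketch (illustrative only, not Mozart's propagation engine) on a toy problem. Generate-and-test enumerates the full Cartesian product, while propagation first shrinks the domains by polynomial local inference and only then searches:

```python
# Generate-and-test vs. propagate-then-search on a toy problem:
# X + Y == 10 and X < Y, with X and Y ranging over 0..9.
# (Illustrative Python only, not Mozart's propagation engine.)

def generate_and_test(dx, dy):
    """Enumerate the full Cartesian product, testing every pair."""
    tested, sols = 0, []
    for x in dx:
        for y in dy:
            tested += 1
            if x + y == 10 and x < y:
                sols.append((x, y))
    return sols, tested

def propagate(dx, dy):
    """Local pruning repeated to a fixpoint (polynomial work)."""
    changed = True
    while changed:
        # X + Y == 10: each value needs a support in the other domain.
        nx = {x for x in dx if any(x + y == 10 for y in dy)}
        ny = {y for y in dy if any(x + y == 10 for x in dx)}
        # X < Y: prune against the other domain's bounds.
        nx = {x for x in nx if x < max(ny)}
        ny = {y for y in ny if y > min(nx)}
        changed = (nx, ny) != (dx, dy)
        dx, dy = nx, ny
    return dx, dy

dom = set(range(10))
sols, tested = generate_and_test(dom, dom)    # tests all 100 pairs
dx, dy = propagate(dom, dom)                  # domains shrink first
sols2, tested2 = generate_and_test(dx, dy)    # same solutions, 64 pairs
print(sorted(sols), tested)
print(sorted(sols2), tested2)
```

Propagation here cuts the search space from 100 candidates to 64 without losing any solution; on real combinatorial problems, with stronger propagators, the same mechanism is what produces the orders-of-magnitude gains.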

However, in some examples propagation may indeed reduce generation to nothing. Renaud De Landtsheer wrote a small Sudoku solver using Regin's alldiff constraint (FD.distinctD). Surprisingly, search (generation) is rarely needed to solve this problem, which suggests that Sudoku is not really a hard problem ;-)
http://www2.info.ucl.ac.be/ingidocs/people/rdl/sudoku/solvingsudoku.html

> Constraints save more work when one can cut out a number of possible values in one step. To do that, one needs a domain whose values can be reasoned about, i.e., the domain is not just a set but something like the integers, with much more structure. Also, and more significantly, one needs a whole new level of mechanism to talk and reason about domains. Apparently Oz doesn't support Oz-level definition of range constraints. One must write such constraints in C++, which is not desirable.

One can write constraint propagators in Oz to some extent. The idea is either to combine existing propagators, or to "code" the domain as integers, sets, or records. For instance, colleagues have implemented constraints on graphs with Oz finite sets.

Another example is the "circuit" constraint. Consider a tuple S defining a successor mapping (S.I is the successor of I). S is a circuit if the mapping defines a single maximal cycle through all elements. An Oz implementation is:

proc {Circuit S}
   N={Width S}
in
   S ::: 1#N          %% every successor is a node in 1..N
   {FD.distinct S}    %% all successors are pairwise different
   for I in 1..N do
      %% the first K successors reached from J differ from I,
      %% i.e., no cycle through I is shorter than N
      proc {Diff J K}
         if K>0 then S.J \=: I {Diff S.J K-1} end
      end
   in
      thread {Diff I N-1} end   %% blocks until S.J is determined
   end
end
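For reference, the property that this propagator enforces can be checked directly on a fully determined tuple. Here is an illustrative Python sketch of the specification (a checker, not a propagator; the function name is made up):

```python
# Check the "circuit" property on a determined successor tuple:
# s is a circuit iff following successors from node 1 visits all
# n nodes before returning to the start (one maximal cycle).

def is_circuit(s):
    """s[i] is the 1-based successor of node i+1."""
    n = len(s)
    node, seen = 1, set()
    for _ in range(n):
        if node in seen:
            return False      # hit a cycle shorter than n
        seen.add(node)
        node = s[node - 1]
    return node == 1 and len(seen) == n

print(is_circuit([2, 3, 4, 1]))  # True:  1->2->3->4->1
print(is_circuit([2, 1, 4, 3]))  # False: two cycles of length 2
```

The Oz code above enforces exactly this: FD.distinct makes S a permutation (a union of cycles), and the Diff threads rule out every cycle of length below N.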

> Consider the following example.
>
> Suppose you have two variables, X1 and X2. You want them both to come from some list Xs. Suppose that you want X1 to appear earlier in Xs than X2. But suppose that the program is not generating X1 and X2 directly from that specification: X1 and X2 are generated through other means, but they must satisfy that constraint.
> (...)
> A real constraint system would declare the ranges of X1 and X2 to be constrained to Xs, and then modify those range-constraint declarations whenever either becomes instantiated.

A real system should do even more. A propagator should watch the domains of both X1 and X2 and check their possible positions in the list; it is not always necessary to wait for determination to check the constraint.

> The question is: how can one do that in Oz? How can one express range constraints and modifications to those range constraints at the Oz level? I imagine it can be done. But it will take a bit of work.

Here is how to code that constraint in Oz if Xs is a list of integers:

proc {ElementBefore Xs X1 X2}
   I1 I2
in
   [I1 I2] ::: 1#{Length Xs}
   {FD.element I1 Xs X1}     % X1 is the element of Xs at position I1
   {FD.element I2 Xs X2}     % X2 is the element of Xs at position I2
   I1 <: I2                  % X1 occurs before X2
end

For instance, if Xs=[12 23 37 52 61 78 59 32 11 7], adding the constraints X1<40 and X2>50 removes 7, 11 and 32 from the domain of X1. The propagators automatically exploit the "structure" of Xs (the values over 50 lie between the 4th and 7th positions, so I1 is at most 6).
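One can recompute this pruning by hand with a small Python fixpoint (illustrative only; the real FD propagators work on compact domain representations, not explicit value sets):

```python
# Recompute the ElementBefore pruning with plain Python: a naive
# fixpoint over explicit value sets (illustrative only; Mozart's FD
# propagators operate on compact domain representations).

Xs = [12, 23, 37, 52, 61, 78, 59, 32, 11, 7]
n = len(Xs)

i1 = set(range(1, n + 1))          # candidate positions for X1
i2 = set(range(1, n + 1))          # candidate positions for X2
x1 = {v for v in Xs if v < 40}     # X1 < 40
x2 = {v for v in Xs if v > 50}     # X2 > 50

changed = True
while changed:
    old = (i1, i2, x1, x2)
    # FD.element: position and value domains constrain each other.
    i1 = {i for i in i1 if Xs[i - 1] in x1}
    x1 = {Xs[i - 1] for i in i1}
    i2 = {i for i in i2 if Xs[i - 1] in x2}
    x2 = {Xs[i - 1] for i in i2}
    # I1 <: I2, as bounds propagation on the positions.
    i1 = {i for i in i1 if i < max(i2)}
    i2 = {i for i in i2 if i > min(i1)}
    changed = (i1, i2, x1, x2) != old

print(sorted(x1))  # -> [12, 23, 37]
print(sorted(x2))  # -> [52, 59, 61, 78]
```

The fixpoint narrows I2 to positions 4..7, hence I1 to 1..3, which leaves only 12, 23 and 37 in the domain of X1, all without any generation.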

> A question that comes to my mind (and that I asked a few messages ago, but not as fully) is: why aren't these features already available at the Oz level? Oz must have much of this mechanism already built in. Why must one write C++ code to get at it?

There are two answers to that question. First, quite a few combinatorial problems can easily be "coded" with integers or integer sets. Second, no one has had the time to implement it yet :-(

Note that there is already a library with constraints over reals (see XRI), developed by our "Colombian connection" ;-) Soft constraints are also under development.


Cheers,
raph


_________________________________________________________________________________
mozart-users mailing list                               
[email protected]
http://www.mozart-oz.org/mailman/listinfo/mozart-users