Gents, I was prompted to write up the following by a discussion (argument?) I'm having with Marvin Minsky:
Enjoy, Josh

---------

Programming at the Edge of Cybernetics

The two major approaches to engineering the mind, cybernetics and AI, differ sharply. Cybernetics was based on differential equations, whereas AI favors sequential, symbolic programs.

Sequential programming is much easier than differential equations. You can say exactly what happens, and when. More specifically, effects only go one way: statement A affects the values seen by statement B, but not vice versa. This is to some extent like the difference between a digital logic circuit and an analog one: in a digital circuit, signals only go one way on a wire, and what happens downstream can't affect what happens upstream.

One reason sequential programming is so intuitive is that at the conscious level, we live in a sequential world. This is the intuition Turing captured in the Turing Machine: you do one thing after another, and what happens during the second step can't affect the first. Earlier steps are faits accomplis; causality just doesn't work backwards in time. That's how we see things and how we do things.

We design physical machines as causal chains, too. The fire boils the water into steam, which pushes against the piston, which cranks the wheel, making the train move. But when the train hits a slope, the weight turns to torque, which exerts a back-pressure on the piston, and the whole process slows. The "causal chain" in a locomotive is not described by a sequential program, but by an equation.

It is important to separate two distinctions that I've conflated so far. One is the sequential versus equational formulation, with their difference in how causality flows. The other is discrete versus continuous. Although the two are often linked, as in analog versus digital circuits, they do not have to be. We can build continuous circuits that exhibit one-way causality, such as meters and op-amps, and we can build digital systems that exhibit two-way causality, such as cellular automata.
(Two-way in space, anyway!)

The sequential program is the most unresponsive form of machine Man has yet invented. If we had built a locomotive as a sequential program, the fire would heat the water, steam would push the piston, and when the torque on the wheel proved insufficient for the slope, the program would dump core or pop up an error window with a useless and insulting message. With a sequential program, you simply can't push back.

What actually happens in the mind? Let's consider a Society of Mind-like agency, with a Build agent that gets blocks by activating Find, Reach, and Grasp in sequence. At this level, the sequential aspect of Build is necessary. But inside, say, Reach, there may be something more: Reach is using Look to gauge the position of the hand, and various Muscles to move it. But the Muscles are giving proprioceptive feedback as to expected position, and the eyes have to be moved to track the hand. You can think of this as lots of servo loops, or perhaps more simply as a set of equational constraints that tracks an equilibrium.

When the hand reaches position over a block, Build switches from Reach to Grasp. Or does it? It would make more sense to run both for an overlap period, during which each constrains the other. Inside Grasp, there's an equilibrium to be found between muscle tension and skin pressure. There's probably even more interaction between Find and the earlier parts of Reach.

This works well in a continuous domain, but what about a discrete, symbolic one? I think equation-like constraints are at work there too. Consider language, which requires a series of distinct symbols. We use various forms of word agreement, devices like alliteration and consonance, continued metaphors, segues and transitions to stitch speech seemingly seamlessly (sorry!).

Back in the programming world, there is a wealth of forms in which equations can be used to harness two-way causality.
There are continuous differential systems, the physicist's friend. There are diophantine equations and discrete constraint systems, and various forms of linear programming. There is Backus' algebra of functions, whose point the field of functional programming appears to have missed.

At the molecular scale, there are two ways to build machines. We can build them like macroscale machines, with parts like gears and shafts that have to be put together by robot manipulators, just as a macroscale machine would be; or by self-assembly, letting the parts float around and match up, the way a virus forms. In designing nanomachines, I came up with a hybrid notion: self-assisted assembly. (I'd bet Drexler thought of it first, but I was the first to use the phrase in print.) The idea is that you use machine-like systems, but do the finest manipulations with self-assembly. For example, if you want to put a screw into a threaded hole, you hold it close, but loosely enough that it *could* go in the hole, and it not only will go in, but screw itself in. You couldn't build anywhere near as complex a system by pure self-assembly, because parts would go to the wrong places; but with self-assisted assembly you can design a system substantially less complex than full precision and control would have required.

Designing an entire system in equational, unrestricted-causality form is too tough for humans to do. Half of what engineers do is aimed at reducing side-effects. Designing systems in strictly discrete, logical form is hard, too. Humans like to think in abstractions at the appropriate level, abstractions that hide the complexity of the lower levels of implementation. But almost always, the abstractions don't exactly match the implementation details. Half of any significant software system is housekeeping, boilerplate, and code to match formats and handle special cases.
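To make the two-way flavor concrete, here is a minimal sketch of one of the simplest such forms: a propagator-style constraint network, where an equation can be solved in whichever direction the available data allows. All the names here (Connector, Adder, the steam/back-pressure framing) are invented for this note, not taken from any particular library; Python is chosen arbitrarily.

```python
# A toy propagator-style constraint network: set any two of the three
# quantities related by an equation and the third follows. All names
# are invented for this sketch.

class Connector:
    """Holds a value and notifies attached constraints when it changes."""
    def __init__(self, name):
        self.name = name
        self.value = None
        self.constraints = []

    def set(self, value, source=None):
        if self.value == value:
            return
        self.value = value
        for c in self.constraints:
            if c is not source:
                c.propagate()

class Adder:
    """Two-way constraint: total = left + right, solvable for any one part."""
    def __init__(self, left, right, total):
        self.left, self.right, self.total = left, right, total
        for conn in (left, right, total):
            conn.constraints.append(self)

    def propagate(self):
        l, r, t = self.left.value, self.right.value, self.total.value
        if l is not None and r is not None:
            self.total.set(l + r, source=self)
        elif t is not None and l is not None:
            self.right.set(t - l, source=self)
        elif t is not None and r is not None:
            self.left.set(t - r, source=self)

# Forward: steam pressure minus back-pressure gives the net force...
steam, back, net = Connector("steam"), Connector("back"), Connector("net")
Adder(back, net, steam)            # steam = back + net
steam.set(100)
back.set(30)
print(net.value)                   # 70

# ...and backward: fix the net force instead, and the same constraint
# deduces the back-pressure. Causality runs whichever way the data allows.
steam2, back2, net2 = Connector("steam"), Connector("back"), Connector("net")
Adder(back2, net2, steam2)
steam2.set(100)
net2.set(70)
print(back2.value)                 # 30
```

The point is that a single declared relation plays the role of several one-way assignments; the "pushing back" that a sequential program forbids falls out for free.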
We need programming languages that allow sequential specification at the proper level, like Look, Reach, and Grasp, but which allow equational specification of constraints that smoothly move the system from one phase to the next. We need programming languages that let us specify what we mean in high-level terms, and define high-level terms in low-level terms, but have "self-assisted assembly" put the system together without our having to do all the tedious match-up by hand. That way, the equations come in small bunches that can be comprehended by mere mortals.

Discrete and mixed-mode constraint systems will be just as valuable as continuous ones, if not more so. The key is that they will create feedback paths in programs that the programmer didn't have to separately discover and implement. Programs, any programs, not just AIs, will be simpler and more robust.
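As one possible sketch of what that might feel like, here is the Reach/Grasp handoff rendered as two coupled servo loops run with an overlap period instead of a hard sequential switch. Everything here (the one-dimensional dynamics, the gains, the blend rule) is invented for illustration, a crude stand-in for real equational constraints rather than a definitive design.

```python
# Reach and Grasp as overlapping servo loops: Grasp's authority grows
# continuously as Reach's error shrinks, so the two phases constrain
# each other during the handoff. All numbers and names are illustrative.

def reach_step(hand, target, gain=0.3):
    """Servo the hand toward the target (one-dimensional for brevity)."""
    return hand + gain * (target - hand)

def grasp_step(aperture, block_width, gain=0.4):
    """Servo the finger aperture toward the block width."""
    return aperture + gain * (block_width - aperture)

def build(target=10.0, block_width=2.0, steps=60):
    hand, aperture = 0.0, 8.0          # start far away, fingers wide open
    for _ in range(steps):
        error = abs(target - hand)
        # Blend weight: instead of an if/else phase switch, Grasp's
        # authority rises smoothly as the hand closes on the target.
        w_grasp = 1.0 / (1.0 + error)
        hand = reach_step(hand, target)
        # Grasp only closes as far as its current authority allows.
        aperture += w_grasp * (grasp_step(aperture, block_width) - aperture)
    return hand, aperture

hand, aperture = build()
print(hand, aperture)   # both servos settle: hand at the target,
                        # aperture at the block width
```

The sequential skeleton (Build still "does Reach, then Grasp") survives at the top level, but the transition between phases is an equilibrium the constraints find, not an edge the programmer had to place.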