It's a good point. It seems Crutchfield is using the idea of a computational machine - the parts necessary to construct a model of the environment - to define innovation. The innovation is a leap from one model class to another ... from one type of machine to another.
If one defines the computational machine for a jump shot as a model of how to get the ball in the basket, then the hook shot is a leap to a new model class, fundamentally different from the "jump-shot" model class. The person who developed the hook shot has produced a new model that performs better in a certain environment. If this is correct, then how narrowly he defines a model class would determine how broadly he defines "new." I think.

-Ted

On Sun, Nov 1, 2009 at 9:54 PM, Nicholas Thompson <[email protected]> wrote:

> Well, to the extent that this is a discussion of Crutchfield, I don't see how the hook shot would be something new in his terms. He seems to mean something quite narrow by "new," and it seems to have something to do with a new type of computational "machine." Since "computational machine" is an intuitional black hole for me, I cannot say whether the jump shot is a new sort of computational machine or not, but I am inclined to doubt it.
>
> Nick
>
> Nicholas S. Thompson
> Emeritus Professor of Psychology and Ethology,
> Clark University ([email protected])
> http://home.earthlink.net/~nickthompson/naturaldesigns/
>
> ----- Original Message -----
> *From:* Ted Carmichael <[email protected]>
> *To:* The Friday Morning Applied Complexity Coffee Group <[email protected]>
> *Sent:* 11/1/2009 6:54:44 PM
> *Subject:* Re: [FRIAM] Crutchfield's "Is anything ever new?"
>
> I'm actually fine with re-defining 'scale' to mean something along the lines of the amount of error in the mapping. That is mostly, I think, what I was trying to say. Let me see if I can clarify my points a little.
>
> There is definitely a large number of differences between two people using the same method to shoot a basket. All the things you mentioned - eye movement, exact combination of muscles, etc. I was trying to say that this is a different scale (a wider range of error, perhaps) when compared to two shooters using different methods ...
> e.g., one person shoots in the traditional way and one person makes a 'granny shot.'
>
> I agree that two people using the same method is an illusion. But it is a useful illusion when differentiating between the traditional method and the granny method. Similarly, when Kareem Abdul-Jabbar used the hook shot, it was an innovative (hence: new) method for the NBA. In this way I would say there are different levels of abstraction available ... one simply picks the level of abstraction that is useful for analysis.
>
> I tried to use the mathematical example of calculating a product to illustrate this same idea. When calculating 49 * 12, one might use the common method of working first through the ones column, then the tens column, and adding the results, etc. Another person may invent a new method, noticing that 49 is one less than 50, and that half of 12 is 6, and say the answer is 600 - (12 * 1) = 588. Still another may say that 490 + 100 - 2 is the answer.
>
> What is innovative about these new methods is not that they ignore the common operations of adding, multiplying, and subtracting. It's that these basic operations are combined in an innovative way. If Crutchfield asks: is this really something new? I would say "yes." If he points out that all three methods use the same old operations, I would say that doesn't matter ... those operations are used in an innovative way; in a new combination.
>
> In a slightly different vein, Java is a "new" programming language even if it is only used to implement the same old algorithms. The implementation is new, even if the algorithm - the method - is the same. This is analogous to two mathematicians using the same "trick" to get a product, even if the respective neural networks each person possesses to implement this method are slightly different.
>
> I do admit the term "level" or "scope" can exhibit ambiguities. But I still find that "level" is a useful distinction.
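[An aside on the arithmetic example quoted above: the three routes to 49 * 12 can be written as distinct procedures built from the same primitive operations, which makes the "new combination of old operations" point concrete. A minimal Python sketch; the function names are mine, not from the thread.]

```python
# Three routes to 49 * 12, each recombining the same primitive
# operations (multiply, add, subtract) in a different order.

def column_method():
    # Schoolbook method: ones column (49 * 2), then tens column
    # (49 * 1, shifted), then add the partial products.
    return 49 * 2 + 49 * 10

def fifty_trick():
    # 49 is one less than 50, so 49 * 12 = 50 * 12 - 12 = 600 - 12.
    return 50 * 12 - 12

def split_trick():
    # 12 = 10 + 2, so 49 * 12 = 490 + 98, written as 490 + 100 - 2.
    return 490 + 100 - 2

# All three "new" methods agree with each other and with 49 * 12.
assert column_method() == fifty_trick() == split_trick() == 588
```

[Each function traverses a different sequence of operations, yet all three are extensionally equivalent; the innovation lives in the combination, not in the primitives.]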
> It does imply varying degrees of complexity, and I think that is a valid implication, even if it is hard to nail down.
>
> I also find it hard to define a counter-example to the proposition that emergent features of a system are always produced from the interactions of elements "one level down." When we look at a marketplace, we assume the "invisible hand" is the result of human interaction. There doesn't seem to be much use in jumping from the level of neurons - or even worse, quarks - straight to the marketplace.
>
> Of course, depending on the scope of the "market" being studied, individual businesses and other multi-person entities may be the most basic elements of this system. There may even be entities defined as "one person" within this system, depending on how much heterogeneity you allow between individual elements.
>
> But, however you define the elements, this essentially means the same as saying "one level down" when talking about the emergent properties of that system. If you want to talk about the emergent properties of a corporation, then you have redefined your system, and hence redefined your elements.
>
> Anyway, the larger point is that innovation happens by combining elements in a new way, however those elements are defined. A RISC processor is innovative in how it combines basic computer operations. Java is innovative in the instructions sent to the processor, and in the package of common tools that comes with it. A new algorithm is innovative in how it uses these tools at a different level of abstraction. And a software package may be new in how it combines many existing algorithms and other elements of visualization and human-computer interaction.
>
> If you don't like "levels" and prefer "layers," then I'm okay with that. But I don't really see the distinction. Can you expand on that?
>
> Cheers,
>
> Ted
>
> On Sun, Nov 1, 2009 at 11:43 AM, glen e. p.
> ropella <[email protected]> wrote:
>
>> Thus spake Ted Carmichael circa 10/30/2009 03:33 PM:
>> > In response to Glen's comments, I would say that his differentiation between thoughts and actions is also a somewhat arbitrary choice of scale. I agree that how two people shoot a basketball is usually more easily translated between them than how they calculate the product of two numbers. When I shoot a basketball, I follow the same general procedure (knees bent, one hand on the side of the ball and one hand behind it, etc.) that other people do. But my physical structure is still different from another person's, so I have refined the general procedure to better match my physical structure. (Or not, since I usually miss the basket.)
>>
>> Yes, you're onto something here. But I wouldn't consider it a matter of general vs. specific for throwing a basketball. Any general method you may think exists is an illusion. Let's say you're learning how to do it from a coach and several fellow players. For each other person you watch do it, their method is particular to _them_. In such a case, there is no general method. You may _imagine_ some illusory general method in your head. But when the method is executed, it is always particular.
>>
>> Now consider the coach's _description_ or model of the method. Even in that case, the description, the words, the actions the coach executes with his mouth and hands in an attempt to communicate an idea are particular to him. The descriptive actions are particular to him. Even in that case, there is no general method. Any general method you may think exists is pure fiction. What matters is the particular actions.
>>
>> Induction is a myth. [*]
>>
>> It's not general vs. specific. It is abstract vs. concrete. Your observation of either the coach's description or your fellow players' methods is chock full of errors and noise.
>> In order to cope with such noise and translate from their actions to your actions, you have to fill in the blanks. You are totally ignorant of, say, how fast to twitch your eyes while you're maintaining focus on the basket ... or how fast to twitch your hand/finger muscles while holding the ball. You can't observe those parts of the method when watching your fellow players. And such information is totally absent from the coach's description. So, you have to make that stuff up yourself.
>>
>> And you make it up based on your _particular_ concrete ontogenetic history. And, hence, when you execute the method, it is also particular to you.
>>
>> However, because your hands, fingers, and eye muscles are almost identical to those of your fellow players and your coach, the method is transferable despite the huge HUGE _HUGE_ number of errors and amount of noise in your observations.
>>
>> > Two different people calculating a product, however, may use two totally different methods. One person may even have a larger grammar for this, utilizing more methods for more types of numbers than the second person. (In effect, he has more of his brain dedicated to these types of tasks, which gives him the power to have a larger "math" grammar.) So it's probably more precise to say: at a certain scale 'actions' can be mapped between two people but 'thoughts' cannot be.
>>
>> It's less a matter of scale than it is of noise and error. When calculating a product (or doing any of the more _mechanical_ -- what used to be called "effective" -- methods), the amount of noise and error in the transmission from one person to another is minimized to a huge extent. Math is transferable from person to person for precisely this reason. It is _formal_, syntactic. Every effort of every mathematician goes toward making math exact, precise, and unambiguous.
>> So, my argument is that you may _think_ that you have different methods for calculating any product, and indeed, they may be slightly different. But the amount of variance between, say, two people adding 1+1 and two people throwing a basketball is huge, HUGE, _HUGE_. [grin] OK. I'll stop that. Because (some) math is crisp, it's easier to fill in the blanks after watching someone do it.
>>
>> Now, contrast arithmetic with, for example, coinductive proofs. While it's very easy to watch a fellow mathematician add numbers and then go add numbers yourself, it's quite difficult to demonstrate the existence of a corecursive set after watching another person do it. (At least in my own personal math-challenged context, it's difficult. ;-) You can't just quickly fill in the blanks unless you have a lot ... and I mean a LOT of mathematical experience lying about in your ontogenetic history. Typically, you have to reduce the error and noise by lots of back and forth ... "What did you do there?" ... "Why did you do that?" ... "What's that mean?" Etc.
>>
>> Hence, it's not a matter of scale. It's a matter of the amount of error, noise, and ignorance in the observation of the method. And it's not about the transfer of the fictitious flying spaghetti monsters in your head. It's a matter of transferring the actions, whatever the symbols may mean.
>>
>> > If you go down to the lower-level processes, all of our neurons behave in approximately the same ways. So at this scale they can be mapped, one person to another. I.e., when thinking, one of my neurons is just as easily mapped to one of your neurons as my actions are to your similar actions.
>>
>> Right. But similarity at various scales is only relevant because it helps determine the amount of error, noise, variance, and uncertainty at whatever layer the abstraction (abstracted from the concrete) occurs. Note I said "layer", not "level".
>> The whole concept of levels is a red herring and should be totally PURGED from the conversation of emergence, in my not-so-humble opinion. ;-)
>>
>> * I have what I think are strong arguments _against_ the position I'm taking here. But I'm trying to present the argument in a pure form so that it's clear. I'm sure at some point in the future, when I finally get a chance to pull out those arguments, someone will accuse me of contradicting myself. [sigh]
>>
>> --
>> glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com
>>
>> ============================================================
>> FRIAM Applied Complexity Group listserv
>> Meets Fridays 9a-11:30 at cafe at St. John's College
>> lectures, archives, unsubscribe, maps at http://www.friam.org
