I think YKY is right on this one. There was a Dave Barry column about going to the movies with kids, in which a 40-foot image of a handgun appears on the screen, at which point every mother in the theater turns to her kid and says, "Oh look, he's got a GUN!"
Communication in natural language is extremely compressed. It's a code that expresses the *difference* between the speaker's and the hearer's states of knowledge, not a full readout of the meaning. (This is why misunderstanding is so common; witness the "intelligence" discussion here.)

Even a theoretical Solomonoff/Hutter AI would flounder if given a completely compressed bit-stream: it would look random, incompressible, and unpredictable, like Chaitin's Omega number. Language is a lot closer to this than a kid's sensory input stream is.

There's a quote widely attributed to a "William Martin" (anybody know who he is?): "You can't learn anything unless you almost know it already." In general, the hearer needs a world model almost the same as the speaker's.

Let's call this "Winograd's Theory of Understanding" (WTU): that having a model capable of simulating the domain of discourse is necessary and sufficient for understanding discourse about it. (NB: (a) there are different levels of completeness and accuracy for simulations, and likewise for understanding; (b) "symbol grounding" in the sense of associations to physical sensory/motor signals is *not necessary*.) I find SHRDLU and its intellectual descendants a convincing demonstration of WTU.

This implies that understanding an NL sentence consists not only in parsing it into an internal representation and stashing it somewhere, but, if it says something you didn't already know, in modifying and augmenting the mechanism of your world model so that the new knowledge shows up in future simulations. In other words, it means building a working mechanism and integrating it into an existing vast, complex machine. (Toy sketches of the compression point and the model-update point follow the quoted message below.)

Josh

On Saturday 28 April 2007 03:29, YKY (Yan King Yin) wrote:
> "Layered learning" is not just better, it's actually the only
> computationally feasible approach.
>
> We may talk to a baby like:
> "MILK?"
> "You want to play BALL?"
> "Oh you POO-POO again" etc.
> And these things are said simultaneously as some *physical* events (eg
> milk, ball, poo) are happening, which allows the baby to correctly *bind*
> the words to concepts, ie achieve grounding.
>
> Contrast this with something from Wall Street Journal:
> Headline: "Employees of a new plan to get Dell back on the road to growth,
> including streamlining management and looking at new methods of
> distribution beyond the computer company's direct-selling model."
> Can a baby really learn from THIS ^^^ ?
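
P.S. A toy sketch of the compression point, nothing rigorous: once a good compressor has squeezed the regularity out of a stream, there is almost nothing left for a learner to latch onto. zlib here is just standing in for "a good compressor."

import zlib

# A redundant, language-like input: lots of structure to exploit.
text = ("the quick brown fox jumps over the lazy dog " * 200).encode()

once = zlib.compress(text, 9)    # first pass removes most of the redundancy
twice = zlib.compress(once, 9)   # second pass finds almost nothing left

print(f"raw:           {len(text)} bytes")
print(f"compressed:    {len(once)} bytes")
print(f"re-compressed: {len(twice)} bytes")
# Typically the second pass is no smaller (often slightly larger), because
# the first pass's output already looks close to random noise.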
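
And a minimal, hypothetical sketch of the model-update point, in the spirit of a blocks world. None of this is SHRDLU's actual code; the names (world, understand, query) are made up for illustration. The idea is only that "understanding" a sentence means changing the simulation so that future queries come out differently.

# A hypothetical blocks-world model: block -> what it currently sits on.
world = {"A": "table", "B": "table", "C": "table"}

def understand(sentence: str) -> None:
    """Crude parser for sentences like 'put A on B'; updates the model."""
    words = sentence.lower().split()
    if words[:1] == ["put"] and "on" in words:
        block, support = words[1].upper(), words[words.index("on") + 1].upper()
        world[block] = support   # integrate the new fact into the mechanism

def query(sentence: str) -> bool:
    """Answer 'is A on B'-style questions by consulting the simulation."""
    words = sentence.lower().split()
    block, support = words[1].upper(), words[words.index("on") + 1].upper()
    return world.get(block) == support

understand("put A on B")
print(query("is A on B"))   # True -- the sentence changed what the model simulates
print(query("is C on B"))   # False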
