James> Here is a nice definition found at:
James> http://www.cs.dartmouth.edu/~brd/Teaching/AI/Lectures/Summaries/natlang.html
James> To understand something is to transform it from one
James> representation into another, where this latter representation
James> is chosen to correspond to a set of available actions that
James> could be performed, and for which a mapping is designed so that
James> for each event an appropriate action will be performed.

James> My problem with "Compression/Compactness is Key to AI" or
James> Occam's razor is that it should be the upper bounding
James> restriction, and not any base premise or key to intelligence.

I am not claiming that compression is equivalent to understanding, or even essential to understanding. In fact, if you read chapter 6, I am not even claiming that evolution produced the most compact representation possible. (It exploited constraints of early stopping and evolvability, as well as compactness.)

I am claiming that if you find a constrained enough program (the simplest way to think about "constrained" is "incredibly concise," though that's not the only way) that solves a very large number of naturally presented problems, the only way that can happen is if the code exploits underlying structure in the world presenting the problems. In that case it will continue solving new problems presented by the world that it hadn't seen before, and can be considered to understand. I am further claiming that this is the basic underlying phenomenon giving rise to human understanding, and that most likely there is no other route to understanding.

James> Using this principle we can make a "better" intelligence, but
James> NOT an intelligence, correct?

James> For example, if a procedure to boil an egg has 10 steps in it,
James> but we have another that has 8, and they both achieve the goal
James> correctly, then we go with the shorter one.

I am not claiming that.
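The claim above, that a program too concise to have its tasks separately programmed in must be exploiting shared structure and will therefore generalize, can be sketched with a toy example. (The doubling task and both programs here are my invention, purely for illustration; nothing like this appears in the book.)

```python
# Toy illustration: two "programs" that both solve every training task,
# compared by how they behave on a problem they were never trained on.

# Training tasks: map each input number to its double.
training = {1: 2, 2: 4, 3: 6, 4: 8, 5: 10}

# Program A: large and unconstrained -- one hard-coded answer per task,
# analogous to a pile of separately programmed kitchen routines.
lookup_table = dict(training)

def program_a(x):
    # Returns None for anything it wasn't explicitly given.
    return lookup_table.get(x)

# Program B: concise. It is too short to store the tasks separately,
# so the only way it can fit them all is by capturing the structure
# they share (here, "output = 2 * input").
def program_b(x):
    return 2 * x

# Both solve every training task...
assert all(program_a(x) == y for x, y in training.items())
assert all(program_b(x) == y for x, y in training.items())

# ...but only the concise program handles a problem it has never seen.
print(program_a(100))  # None -- clueless outside its stored answers
print(program_b(100))  # 200 -- the exploited structure carries over
```

The point of the sketch is only the asymmetry: shrinking the description below the size of the task list forces the structure to be captured, and the generalization comes for free.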
I am claiming that if you fix a language, and then find an incredibly concise program that does a huge variety of tasks connected with the kitchen, such as cooking great Chinese food, great Italian food, and great Indian food, cleaning the dishes, shopping for the food, planning pleasing menus, and what have you, and this code is so constrained, or so concise, that it could not possibly have these varied tasks separately programmed in, then the only way that could happen is if the code has developed an abstraction hierarchy that reuses various modules exploiting various kinds of real-world structure, and it will then flexibly be able to solve new problems that it hadn't been trained on. If you ask it to cook you a dinner, and some of the ingredients its recipes need are missing, it will improvise and create something great.

James> But that sidesteps the entire problem of AI seeking, in that
James> we have to find/learn both of those procedures first, and then
James> the rest is simply optimization.

James> The same thought goes for slow or large AIs. They are both
James> fine; once we show we can do one that is slow or huge, then we
James> can optimize it down to make it run within real-time and space
James> constraints.

Conversely, if you produce a large, unconstrained program that solves a number of kitchen tasks, there is no reason why it should exploit underlying structure, so it will not generalize to any new tasks it hasn't been programmed for. Typically it will consist more or less of a collection of independent programmed routines, and you may further optimize those routines so they run faster or what not, but if you present it with some new problem, it won't know how to solve it, and will generically behave in a clueless way.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
