Eric> Conversely, if you produce a large unconstrained program that solves
Eric> a number of kitchen tasks, there is no reason it would exploit
Eric> underlying structure, so it will not generalize to any new tasks it
Eric> had not been programmed on. Typically, it will more or less consist
Eric> of a collection of independently programmed routines, and you may
Eric> further optimize those routines so they run faster or whatnot,
Eric> but if you present it with some new problem, it won't know how to
Eric> solve it and will generically behave in a clueless way.
James: This is similar to my beliefs, but I'm afraid this will have to be built up over a great many examples and a long amount of "living" time for an AI.
I believe that the first AIs will have many of these "independent routines" (not programmed, but perhaps learned), and they will have to be able to look at those routines and take advantage of the similarities between them. They will need this ability in order to generalize solutions to novel problems.
Ex: a walk() function and a run() function share many of the same inputs, steps, and outcomes, so an AI should be able to find those relationships and easily work out a similar jog() function.
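As a toy sketch of that idea (all names and parameter values here are hypothetical, not from any actual system): if walk() and run() are recognized as the same routine differing only in parameters, a shared gait() can be factored out and jog() derived by interpolating between them rather than being programmed from scratch.

```python
def gait(stride_m, cadence_spm, duration_s):
    """Shared structure: distance covered at a given stride and cadence."""
    steps = cadence_spm / 60.0 * duration_s
    return steps * stride_m  # metres travelled


def walk(duration_s):
    return gait(stride_m=0.7, cadence_spm=110, duration_s=duration_s)


def run(duration_s):
    return gait(stride_m=1.5, cadence_spm=180, duration_s=duration_s)


def jog(duration_s):
    # "Worked out" from the relationship between walk and run:
    # the midpoint of their parameters, not a separately coded routine.
    return gait(stride_m=(0.7 + 1.5) / 2,
                cadence_spm=(110 + 180) / 2,
                duration_s=duration_s)


print(walk(60), jog(60), run(60))  # jog falls between walk and run
```

The point of the sketch is only the factoring: once the common gait() structure exists, a novel behavior comes almost for free.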
But again, the first AIs will be big and bulky. I think they intentionally should be, and may be so for a long time, in order to represent the many, many small nuances of things before they can perform compression with a good deal of reliability.
James Ratcliff
Eric Baum <[EMAIL PROTECTED]> wrote:
James> Here is a nice definition found from:
James> http://www.cs.dartmouth.edu/~brd/Teaching/AI/Lectures/Summaries/natlang.html
James> To understand something is to transform it from one
James> representation into another, where this latter representation
James> is chosen to correspond to a set of available actions that
James> could be performed, and for which a mapping is designed so that
James> for each event an appropriate action will be performed.
James> My problem with "Compression/Compactness is Key to AI" or
James> Occam's razor is that it should be the upper-bounding
James> restriction, and not any base premise or key to intelligence.
I am not claiming that compression is equivalent to understanding, or
even essential to understanding. In fact, if you read chapter 6, I am
not even claiming that evolution produced the most compact
representation possible. (It exploited constraints of early stopping,
and evolvability, as well as compactness.)
I am claiming that if you find a constrained enough program (the
simplest way to think about "constrained" is constrained by being
incredibly concise, but it's not the only way)
that solves a very large number of naturally presented problems,
the only way that can happen is if the code exploits underlying
structure in the world presenting the problems,
in which case it will continue solving new problems presented by the
world that it hadn't seen before, and can be considered to understand.
I am further claiming that this is the basic underlying phenomenon
giving rise to human understanding, and that most likely there is no
other route to understanding.
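Eric's contrast between a concise program and a collection of separately programmed cases can be illustrated with a deliberately trivial example (mine, not his): two "programs" that both solve the training cases of a squaring task. The concise one encodes the underlying rule and so handles unseen inputs; the unconstrained lookup table has one entry per memorized case and is clueless beyond them.

```python
training = {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}


def concise(n):
    """Compact program: encodes the structure (n squared) itself."""
    return n * n


# Unconstrained program: just the training cases, separately stored.
table = dict(training)


def lookup(n):
    """Solves exactly the programmed cases and nothing else."""
    return table.get(n)


print(concise(10))  # 100  -- generalizes to an unseen input
print(lookup(10))   # None -- no underlying structure exploited
```

Both agree on every training case; only the constrained one can be said to "understand" the task in Eric's sense.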
James> Using this principle we can make a "Better" intelligence, but
James> NOT an intelligence, correct?
James> IE examples, if a procedure to boil an egg or such has 10 steps
James> in it, but we have another that has 8 and they both achieve the
James> goal correctly then we go with the shorter one.
I am not claiming that. I am claiming that if you fix a language,
and then find an incredibly concise program that does a huge variety
of tasks connected with the kitchen, such as cooking great Chinese
food, and great Italian food, and great Indian food, and cleaning the
dishes, and shopping for the food, and planning pleasing menus,
and what have you, and this code is so constrained or so concise
that it could not possibly have these varied tasks separately
programmed in, the only way that could happen is if the code has
developed an abstraction hierarchy that reuses various modules
that exploit various kinds of real world structure, and it will then
flexibly be able to solve new problems that it hadn't been trained on.
If you ask it to cook you a dinner, and some of the ingredients it
needs for its recipes are missing, it will improvise and create
something great.
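A minimal sketch of what such an abstraction hierarchy might look like (the pantry, the substitution table, and every function name here are invented for illustration): shared low-level modules are reused across dishes, and a missing ingredient is handled by a structure-aware substitution rather than by failure.

```python
PANTRY = {"rice", "onion", "lentils", "garlic"}
SUBSTITUTES = {"basmati": "rice", "shallot": "onion"}


def obtain(ingredient):
    """Reusable module: fetch an ingredient, improvising if it's missing."""
    if ingredient in PANTRY:
        return ingredient
    swap = SUBSTITUTES.get(ingredient)
    if swap in PANTRY:
        return swap  # improvise: exploit known structure among ingredients
    raise KeyError(f"no way to obtain {ingredient!r}")


def cook(dish, ingredients):
    """Higher-level module composed from the shared obtain() primitive."""
    used = [obtain(i) for i in ingredients]
    return f"{dish} made with {', '.join(used)}"


# "basmati" is not in the pantry, but the shared module improvises:
print(cook("dal", ["lentils", "basmati", "garlic"]))
```

The contrast with a per-dish routine is that improvisation lives once, in the shared module, instead of being re-programmed into every recipe.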
James> But that
James> sidesteps the entire problem of AI-seeking, in that we have to
James> find/learn both of those procedures first, and then the rest is
James> simply optimization.
James> The same thought goes for slow or large AIs. They are both
James> fine; once we show we can do one that is slow or huge, then
James> we can optimize it down to make it work within real-time and
James> space constraints.
Conversely, if you produce a large unconstrained program that solves
a number of kitchen tasks, there is no reason it would exploit
underlying structure, so it will not generalize to any new tasks it
had not been programmed on. Typically, it will more or less consist
of a collection of independently programmed routines, and you may
further optimize those routines so they run faster or whatnot,
but if you present it with some new problem, it won't know how to
solve it and will generically behave in a clueless way.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
_______________________________________
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! http://www.falazar.com/projects/Torrents/tvtorrents_show.php
