Jim,

Two things.

1) If the method I have suggested works for the simplest case, it is
quite straightforward to add complexity and then ask how to solve it
now. If you can't solve that case, there is no way in hell you will solve
the full AGI problem. This is how I intend to figure out how to solve such a
massive problem. You cannot tackle the whole thing all at once. I've tried
it and it doesn't work because you can't focus on anything. It is like a
Rubik's cube. You turn one piece to get the color orange in place, but at
the same time you are screwing up the other colors. Now imagine that times
1000. You simply can't do it. So, you start with a simple demonstration of
the difficulties and show how to solve a small puzzle, such as a Rubik's
cube with 2 little cubes to a side instead of 3. Then you can show how to
solve 2 sides of a Rubik's cube, etc. Eventually, it will be clear how to
solve the whole problem, because by the time you're done you have a complete
understanding of what is going on and how to go about solving it.

2) I haven't mentioned a method for matching expected behavior to
observations and bypassing the default algorithms, but I have figured out
quite a lot about how to do it. I'll give you an example from my own notes
below. What I've realized is that the AI creates *expectations* (again).
When those expectations are matched, the AI does not do its default
processing and analysis. It doesn't do the default matching that it normally
does when it has no other knowledge. It starts with an existing hypothesis.
When unexpected observations or inconsistencies occur, the AI will have
a *reason* or *cue* (these words again... very important concepts) to look
for a better hypothesis. Only then should it look for another hypothesis.
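To make the idea concrete, here is a minimal sketch of that expectation-gating loop. This is my own illustration, not a worked-out design; the class and method names are made up, and "default processing" is just a stub standing in for whatever the full analysis would be.

```python
# Sketch: expectation-gated processing. The agent skips its default
# analysis when an observation matches an existing expectation, and only
# a mismatch gives it a *reason* to search for a better hypothesis.

class Agent:
    def __init__(self):
        # maps a cue (e.g. a state/action pair) to the expected observation
        self.expectations = {}

    def observe(self, cue, observation):
        expected = self.expectations.get(cue)
        if expected == observation:
            # expectation matched: bypass the default processing entirely
            return "matched"
        if expected is None:
            # no prior knowledge: run default analysis (stubbed out here)
            # and record the result as a new expectation
            self.expectations[cue] = observation
            return "default"
        # mismatch: a cue to revise and look for a better hypothesis
        self.expectations[cue] = observation
        return "revise"

agent = Agent()
agent.observe(("notepad icon", "click"), "blank window")   # no knowledge yet
agent.observe(("notepad icon", "click"), "blank window")   # matched, skipped
agent.observe(("notepad icon", "click"), "error dialog")   # reason to revise
```

The point of the sketch is just the three-way branch: matched expectations are cheap, unknowns trigger the default machinery, and contradictions trigger hypothesis revision.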

My notes:
How does the AI learn and figure out how to explain complex unforeseen
behaviors that are not preprogrammable? For example, the situation above
regarding two windows. How does it learn the following knowledge: the
notepad icon opens a new notepad window, and two windows can exist...
not just one window that changes. The bar with the notepad icon represents
an instance. The bar at the bottom with numbers on it represents multiple
instances of the same window, and if you click on it, it shows you
representative bars for each window.

 How do we add and combine this complex behavior learning, explanation,
recognition and understanding into our system?

 Answer: The way that such things are learned is by making observations,
learning patterns and then connecting the patterns in a way that is
consistent, explanatory and likely.

Example: Clicking the notepad icon causes a notepad window to appear with no
content. If we previously had a notepad window open, it may seem like
clicking the icon just clears the content and the instance is the same. But
this cannot be the case, because if we click the icon when no notepad window
previously existed, it will still be blank. Based on these two experiences we
can construct an explanatory hypothesis: clicking the icon simply
opens a blank window. We also get evidence for this conclusion when we see
the two windows side by side. If we see the old window with the content
still intact, we will realize that clicking the icon did not
clear it.
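The elimination argument in that example can be sketched mechanically. This is only an illustration under my own simplifications: each hypothesis is reduced to a prediction about how many windows exist after the click, which is enough for these two experiences to rule one of them out.

```python
# Sketch: two competing hypotheses about clicking the notepad icon,
# tested against the two experiences described above. A hypothesis that
# contradicts any observation is eliminated.

def clears_content(windows_before):
    # H1: the click clears the existing window's content,
    # so the number of windows stays the same
    return len(windows_before)

def opens_blank_window(windows_before):
    # H2: the click opens a new blank window
    return len(windows_before) + 1

hypotheses = {"clears content": clears_content,
              "opens blank window": opens_blank_window}

# Experience 1: one notepad window with content was open; after the
# click there are two windows (the old content is still intact).
# Experience 2: no window was open; after the click there is one blank window.
experiences = [(["notepad with text"], 2), ([], 1)]

for windows_before, windows_after in experiences:
    hypotheses = {name: h for name, h in hypotheses.items()
                  if h(windows_before) == windows_after}

print(list(hypotheses))  # only "opens blank window" survives both experiences
```

Either experience alone eliminates H1 here; the surviving hypothesis is the consistent, explanatory one the notes describe.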

Dave


On Sun, Jun 27, 2010 at 12:39 PM, Jim Bromer <[email protected]> wrote:

> On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner 
> <[email protected]>wrote:
>
>>  Jim :This illustrates one of the things wrong with the
>> dreary instantiations of the prevailing mind set of a group.  It is only a
>> matter of time until you discover (through experiment) how absurd it is to
>> celebrate the triumph of an overly simplistic solution to a problem that is,
>> by its very potential, full of possibilities]
>>
>> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
>> - narrow AI.  Looking for the one right prediction/ explanation is narrow
>> AI. Being able to generate more and more possible explanations, wh. could
>> all be valid,  is AGI.  The former is rational, uniform thinking. The latter
>> is creative, polyform thinking. Or, if you prefer, it's convergent vs
>> divergent thinking, the difference between wh. still seems to escape Dave &
>> Ben & most AGI-ers.
>>
>
> Well, I agree with what (I think) Mike was trying to get at, except that I
> understood that Ben, Hutter and especially David were not only talking about
> prediction as a specification of a single prediction when many possible
> predictions (ie expectations) were appropriate for consideration.
>
> For some reason none of you seem to ever talk about methods that could be
> used to react to a situation with the flexibility to integrate the
> recognition of different combinations of familiar events and to classify
> unusual events so they could be interpreted as more familiar *kinds* of
> events or as novel forms of events which might then be integrated.  For
> me, that seems to be one of the unsolved problems.  Being able to say that
> the squares move to the right in unison is a better description than saying
> the squares are dancing the irish jig is not really cutting edge.
>
> As far as David's comment that he was only dealing with the "core issues,"
> I am sorry but you were not dealing with the core issues of contemporary AGI
> programming.  You were dealing with a primitive problem that has been
> considered for many years, but it is not a core research issue.  Yes we have
> to work with simple examples to explain what we are talking about, but there
> is a difference between an abstract problem that may be central to
> your recent work and a core research issue that hasn't really been solved.
>
> The entire problem of dealing with complicated situations is that these
> narrow AI methods haven't really worked.  That is the core issue.
>
> Jim Bromer
>
>
>



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
