Even with the variations you mention, I remain highly confident this is not
a difficult problem for narrow-AI machine learning methods

-- Ben G

On Sun, Jun 27, 2010 at 6:24 PM, Mike Tintner <tint...@blueyonder.co.uk>wrote:

>  I think you're thinking of a plodding limited-movement classic Pong line.
>
> I'm thinking of a line that can like a human player move with varying
> speed and pauses to more or less any part of its court to hit the ball, and
> then hit it with varying speed to more or less any part of the opposite
> court. I think you'll find that bumps up the variables if not
> unknowns massively.
>
>  Plus just about every shot exchange presents you with dilemmas of how to
> place your shot and then move in anticipation of your opponent's return .
>
> Remember the object here is to present a would-be AGI with a simple but
> *unpredictable* object to deal with, reflecting the realities of there being
> a great many such objects in the real world - as distinct from Dave's all
> too predictable objects.
>
> The possible weakness of this pong example is that there might at some
> point cease to be unknowns, as there always are in real world situations,
> incl tennis. One could always introduce them if necessary - allowing say
> creative spins on the ball.
>
> But I doubt that it will be necessary here for the purposes of anyone like
> Dave -  and v. offhand and with no doubt extreme license this strikes me as
> not a million miles from a hyper version of the TSP problem, where the towns
> can move around, and you can't be sure whether they'll be there when you
> arrive.  Or is there an "obviously true" solution for that problem too?
> [Very convenient these obviously true solutions].
>
>
>  *From:* Jim Bromer <jimbro...@gmail.com>
> *Sent:* Sunday, June 27, 2010 8:53 PM
> *To:* agi <agi@v2.listbox.com>
> *Subject:* Re: [agi] Huge Progress on the Core of AGI
>
> Ben:  I'm quite sure a simple narrow AI system could be constructed to beat
> humans at Pong ;p
> Mike: Well, Ben, I'm glad you're "quite sure" because you haven't given a
> single reason why.
>
> Although Ben would have to give us an actual example (of a pong program
> that could beat humans at Pong) just to make sure that it is
> not that difficult a task, it seems like such an obviously true statement
> that there is almost no incentive for anyone to try it.  However, there are
> chess programs that can beat the majority of people who play chess without
> outside assistance.
> Jim Bromer
>
> On Sun, Jun 27, 2010 at 3:43 PM, Mike Tintner <tint...@blueyonder.co.uk>wrote:
>
>>  Well, Ben, I'm glad you're "quite sure" because you haven't given a
>> single reason why. Clearly you should be Number One advisor on every
>> Olympic team, because you've cracked the AGI problem of how to deal with
>> opponents that can move (whether themselves or balls) in multiple,
>> unpredictable directions, that is at the centre of just about every field
>> and court sport.
>>
>> I think if you actually analyse it, you'll find that you can't predict and
>> prepare for  the presumably at least 50 to 100 spots on a table tennis
>> board/ tennis court that your opponent can hit the ball to, let
>> alone for how he will play subsequent 10 to 20 shot rallies   - and you
>> can't devise a deterministic program to play here. These are true,
>> multiple-/poly-solution problems rather than the single solution ones you
>> are familiar with.
>>
>> That's why all of these sports have normally hundreds of different
>> competing philosophies and strategies, - and people continually can and do
>> come up with new approaches and styles of play to the sports overall - there
>> are endless possibilities.
>>
>> I suspect you may not play these sports, because one factor you've
>> obviously ignored (although I stressed it) is not just the complexity
>> but that in sports players can and do change their strategies - and that
>> would have to be a given in our computer game. In real world activities,
>> you're normally *supposed* to act unpredictably at least some of the time.
>> It's a fundamental subgoal.
>>
>> In sport, as in investment, "past performance is not a [sure] guide to
>> future performance" - companies and markets may not continue to behave as
>> they did in the past -  so that alone buggers any narrow AI predictive
>> approach.
>>
>> P.S. But the most basic reality of these sports is that you can't cover
>> every shot or move your opponent may make, and that gives rise to a
>> continuing stream of genuine dilemmas . For example, you have just returned
>> a ball from the extreme, far left of your court - do you now start moving
>> rapidly towards the centre of the court so that you will be prepared to
>> cover a ball to the extreme, near right side - or do you move more slowly?
>> If you don't move rapidly, you won't be able to cover that ball if it comes.
>> But if you do move rapidly, your opponent can play the ball back to the
>> extreme left and catch you out.
>>
>> It's a genuine dilemma and gamble - just like deciding whether to invest
>> in shares. And competitive sports are built on such dilemmas.
>>
>> Welcome to the real world of AGI problems. You should get to know it.
>>
>> And as this example (and my rock wall problem) indicate, these problems
>> can be as simple and accessible as fairly easy narrow AI problems.
>>  *From:* Ben Goertzel <b...@goertzel.org>
>> *Sent:* Sunday, June 27, 2010 7:33 PM
>>   *To:* agi <agi@v2.listbox.com>
>> *Subject:* Re: [agi] Huge Progress on the Core of AGI
>>
>>
>> That's a rather bizarre suggestion Mike ... I'm quite sure a simple narrow
>> AI system could be constructed to beat humans at Pong ;p ... without
>> teaching us much of anything about intelligence...
>>
>> Very likely a narrow-AI machine learning system could *learn* by
>> experience to beat humans at Pong ... also without teaching us much
>> of anything about intelligence...
>>
>> Pong is almost surely a "toy domain" ...
>>
>> ben g
>>
>> On Sun, Jun 27, 2010 at 2:12 PM, Mike Tintner 
>> <tint...@blueyonder.co.uk>wrote:
>>
>>>  Try ping-pong -  as per the computer game. Just a line (/bat) and a
>>> square(/ball) representing your opponent - and you have a line(/bat) to play
>>> against them
>>>
>>> Now you've got a relatively simple true AGI visual problem - because if
>>> the opponent returns the ball somewhat as a real human AGI does,  (without
>>> the complexities of spin etc just presumably repeatedly changing the
>>> direction (and perhaps the speed)  of the returned ball) - then you have a
>>> fundamentally *unpredictable* object.
>>>
>>> How will your program learn to play that opponent - bearing in mind that
>>> the opponent is likely to keep changing and even evolving strategy? Your
>>> approach will have to be fundamentally different from how a program learns
>>> to play a board game, where all the possibilities are predictable. In the
>>> real world, "past performance is not a [sure] guide to future performance".
>>> Bayes doesn't apply.
>>>
>>> That's the real issue here -  it's not one of simplicity/complexity -
>>> it's that  your chosen worlds all consist of objects that are predictable,
>>> because they behave consistently, are shaped consistently, and come in
>>> consistent, closed sets - and  can only basically behave in one way at any
>>> given point. AGI is about dealing with the real world of objects that are
>>> unpredictable because they behave inconsistently,even contradictorily, are
>>> shaped inconsistently and come in inconsistent, open sets - and can behave
>>> in multi-/poly-ways at any given point. These differences apply at all
>>> levels from the most complex to the simplest.
>>>
>>> Dealing with consistent (and regular) objects is no preparation for
>>> dealing with inconsistent, irregular objects.It's a fundamental error
>>>
>>> Real AGI animals and humans were clearly designed to deal with a world of
>>> objects that have some consistencies but overall are inconsistent, irregular
>>> and come in open sets. The perfect regularities and consistencies of
>>> geometrical figures and mechanical motion (and boxes moving across a screen)
>>> were only invented very recently.
>>>
>>>
>>>
>>>  *From:* David Jones <davidher...@gmail.com>
>>> *Sent:* Sunday, June 27, 2010 5:57 PM
>>>  *To:* agi <agi@v2.listbox.com>
>>> *Subject:* Re: [agi] Huge Progress on the Core of AGI
>>>
>>> Jim,
>>>
>>> Two things.
>>>
>>> 1) If the method I have suggested works for the most simple case, it is
>>> quite straight forward to add complexity and then ask, how do I solve it
>>> now. If you can't solve that case, there is no way in hell you will solve
>>> the full AGI problem. This is how I intend to figure out how to solve such a
>>> massive problem. You cannot tackle the whole thing all at once. I've tried
>>> it and it doesn't work because you can't focus on anything. It is like a
>>> Rubik's cube. You turn one piece to get the color orange in place, but at
>>> the same time you are screwing up the other colors. Now imagine that times
>>> 1000. You simply can't do it. So, you start with a simple demonstration of
>>> the difficulties and show how to solve a small puzzle, such as a Rubik's
>>> cube with 4 little cubes to a side instead of 6. Then you can show how to
>>> solve 2 sides of a rubiks cube, etc. Eventually, it will be clear how to
>>> solve the whole problem because by the time you're done, you have a complete
>>> understanding of what is going on and how to go about solving it.
>>>
>>> 2) I haven't mentioned a method for matching expected behavior to
>>> observations and bypassing the default algorithms, but I have figured out
>>> quite a lot about how to do it. I'll give you an example from my own notes
>>> below. What I've realized is that the AI creates *expectations* (again).
>>> When those expectations are matched, the AI does not do its default
>>> processing and analysis. It doesn't do the default matching that it normally
>>> does when it has no other knowledge. It starts with an existing hypothesis.
>>> When unexpected observations or inconsistencies occur, then the AI will have
>>> a *reason* or *cue* (these words again... very important concepts) to look
>>> for a better hypothesis. Only then, should it look for another hypothesis.
>>>
>>> My notes:
>>> How does the ai learn and figure out how to explain complex unforseen
>>> behaviors that are not preprogrammable. For example the situation above
>>> regarding two windows. How does it learn the following knowledge: the
>>> notepad icon opens a new notepad window and that two windows can exist...
>>> not just one window that changes. the bar with the notepad icon represenants
>>> an instance. the bar at the bottom with numbers on it represents multiple
>>> instances of the same window and if you click on it it shows you
>>> representative bars for each window.
>>>
>>>  How do we add and combine this complex behavior learning, explanation,
>>> recognition and understanding into our system?
>>>
>>>  Answer: The way that such things are learned is by making observations,
>>> learning patterns and then connecting the patterns in a way that is
>>> consistent, explanatory and likely.
>>>
>>> Example: Clicking the notepad icon causes a notepad window to appear with
>>> no content. If we previously had a notepad window open, it may seem like
>>> clicking the icon just clears the content by the instance is the same. But,
>>> this cannot be the case because if we click the icon when no notepad window
>>> previously existed, it will be blank. based on these two experiences we can
>>> construct an explanatory hypothesis such that: clicking the icon simply
>>> opens a blank window. We also get evidence for this conclusion when we see
>>> the two windows side by side. If we see the old window with the content
>>> still intact we will realize that clicking the icon did not seem to have
>>> cleared it.
>>>
>>> Dave
>>>
>>>
>>> On Sun, Jun 27, 2010 at 12:39 PM, Jim Bromer <jimbro...@gmail.com>wrote:
>>>
>>>>  On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner <
>>>> tint...@blueyonder.co.uk> wrote:
>>>>
>>>>>  Jim :This illustrates one of the things wrong with the
>>>>> dreary instantiations of the prevailing mind set of a group.  It is only a
>>>>> matter of time until you discover (through experiment) how absurd it is to
>>>>> celebrate the triumph of an overly simplistic solution to a problem that 
>>>>> is,
>>>>> by its very potential, full of possibilities]
>>>>>
>>>>> To put it more succinctly, Dave & Ben & Hutter are doing the wrong
>>>>> subject - narrow AI.  Looking for the one right prediction/ explanation is
>>>>> narrow AI. Being able to generate more and more possible explanations, wh.
>>>>> could all be valid,  is AGI.  The former is rational, uniform thinking. 
>>>>> The
>>>>> latter is creative, polyform thinking. Or, if you prefer, it's convergent 
>>>>> vs
>>>>> divergent thinking, the difference between wh. still seems to escape Dave 
>>>>> &
>>>>> Ben & most AGI-ers.
>>>>>
>>>>
>>>> Well, I agree with what (I think) Mike was trying to get at, except that
>>>> I understood that Ben, Hutter and especially David were not only talking
>>>> about prediction as a specification of a single prediction when many
>>>> possible predictions (ie expectations) were appropriate for consideration.
>>>>
>>>> For some reason none of you seem to ever talk about methods that could
>>>> be used to react to a situation with the flexibility to integrate the
>>>> recognition of different combinations of familiar events and to classify
>>>> unusual events so they could be interpreted as more familiar *kinds* of
>>>> events or as novel forms of events which might be then be integrated.  For
>>>> me, that seems to be one of the unsolved problems.  Being able to say that
>>>> the squares move to the right in unison is a better description than saying
>>>> the squares are dancing the irish jig is not really cutting edge.
>>>>
>>>> As far as David's comment that he was only dealing with the "core
>>>> issues," I am sorry but you were not dealing with the core issues of
>>>> contemporary AGI programming.  You were dealing with a primitive problem
>>>> that has been considered for many years, but it is not a core research
>>>> issue.  Yes we have to work with simple examples to explain what we are
>>>> talking about, but there is a difference between an abstract problem that
>>>> may be central to your recent work and a core research issue that hasn't
>>>> really been solved.
>>>>
>>>> The entire problem of dealing with complicated situations is that these
>>>> narrow AI methods haven't really worked.  That is the core issue.
>>>>
>>>> Jim Bromer
>>>>
>>>>
>>>>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>>>> <https://www.listbox.com/member/archive/rss/303/> | 
>>>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>>>> <http://www.listbox.com/>
>>>>
>>>
>>>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>>> <https://www.listbox.com/member/archive/rss/303/> | 
>>> Modify<https://www.listbox.com/member/?&;>Your Subscription 
>>> <http://www.listbox.com/>
>>>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>>> <https://www.listbox.com/member/archive/rss/303/> | 
>>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>>> <http://www.listbox.com/>
>>>
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> CEO, Novamente LLC and Biomind LLC
>> CTO, Genescient Corp
>> Vice Chairman, Humanity+
>> Advisor, Singularity University and Singularity Institute
>> External Research Professor, Xiamen University, China
>> b...@goertzel.org
>>
>> "
>> “When nothing seems to help, I go look at a stonecutter hammering away at
>> his rock, perhaps a hundred times without as much as a crack showing in it.
>> Yet at the hundred and first blow it will split in two, and I know it was
>> not that blow that did it, but all that had gone before.”
>>
>>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>> <https://www.listbox.com/member/archive/rss/303/> | 
>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>> <http://www.listbox.com/>
>>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
>> <https://www.listbox.com/member/archive/rss/303/> | 
>> Modify<https://www.listbox.com/member/?&;>Your Subscription
>> <http://www.listbox.com/>
>>
>
>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>    *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

"
“When nothing seems to help, I go look at a stonecutter hammering away at
his rock, perhaps a hundred times without as much as a crack showing in it.
Yet at the hundred and first blow it will split in two, and I know it was
not that blow that did it, but all that had gone before.”



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com

Reply via email to