There aren't that many basic elements across all videogames, but so what?

The videogame test, as posed, is still a pretty good test of the ability to
learn, generalize and adapt across a wide variety of contexts...  And
yeah, to create -- plenty of games require lots of creativity...

I suppose that most video games involve some notion of space and movement
therein; and all involve time, and some notion of maximizing score.  So
there are basic elements.  But having minimal common basic elements among
games doesn't make it a bad AGI test, just a hard AGI test...
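Just to make those common elements concrete: every game presents time-stepped observations, a fixed set of control signals, and a score to maximize. A minimal sketch, purely illustrative -- the `Game` interface, the `GuessGame` toy, and the `play` loop below are hypothetical names of my own, not any real emulator's API:

```python
# Sketch of the minimal common elements shared by videogames:
# time-stepped observations, a set of control signals (button presses),
# and a score to maximize.  All names here are illustrative.
from abc import ABC, abstractmethod
import random

class Game(ABC):
    """Minimal interface capturing the shared elements of videogames."""

    @abstractmethod
    def observe(self):
        """What is 'on screen' at the current tick."""

    @abstractmethod
    def actions(self):
        """The signals a (software-emulated) controller could send."""

    @abstractmethod
    def step(self, action):
        """Advance time one tick; return (observation, score_delta, done)."""

class GuessGame(Game):
    """Toy stand-in for a real game: score a point by pressing the right button."""

    def __init__(self, target="A", rounds=10):
        self.target, self.rounds, self.t = target, rounds, 0

    def observe(self):
        return {"tick": self.t}

    def actions(self):
        return ["A", "B", "X", "Y"]

    def step(self, action):
        self.t += 1
        reward = 1 if action == self.target else 0
        return self.observe(), reward, self.t >= self.rounds

def play(game, policy):
    """Run one episode, accumulating the score the test asks the AI to maximize."""
    obs, total, done = game.observe(), 0, False
    while not done:
        obs, reward, done = game.step(policy(obs, game.actions()))
        total += reward
    return total

random_policy = lambda obs, acts: random.choice(acts)
always_a = lambda obs, acts: "A"

print(play(GuessGame(), always_a))  # a policy matched to this toy game scores 10
```

The point of the test, of course, is that the AI only ever sees the `Game` side of this interface -- it must discover each game's rules through play, not be handed a per-game policy like `always_a`.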

This is not a direction I'm particularly interested in for my own research;
I just put it forth as a response to your challenge to pose a purely
digital AGI test...

If a virtual world like Second Life were rich enough, one could use that
for AGI testing, but in practice those virtual worlds are still too empty
and monotonous to be fully suitable, I think....

ben g


On Sat, Jun 9, 2012 at 5:49 PM, Mike Tintner <[email protected]> wrote:

> Yeah, but what I don't see is what you or he are proposing as the
> generalizable elements of games, which might be concepts of "moves" and
> "pieces" in board games and... I dunno what in physical games involving
> controllers.
>
> In robotic embodied activities, there are obvious generalizable,
> reconfigurable elements. In navigating terrains, presumably the robot must
> be able to keep varying the way its wheels or whatever tread the ground,
> and the way it balances its body. In handling objects, it has to keep
> varying its grip.
>
> What are the comparable constant basic elements for videogames?
>
>  *From:* Ben Goertzel <[email protected]>
> *Sent:* Saturday, June 09, 2012 10:19 PM
> *To:* AGI <[email protected]>
> *Subject:* Re: [agi] The 2 Tests of AGI - generalizability & creativity
>
>
> In the test as Sam Adams suggested it, all video games would be
> included.   The AI's actuation would involve sending signals to the
> computer or game console, similar to the ones sent by a game controller
> (e.g. a GameCube or PS3 controller), and the AI's sensory input would
> consist of what is observed on the screen and what comes out of the
> speakers...  This would not be hard to set up; there are plenty of
> software emulators for game controllers out there...
>
> So, we wouldn't require the AI to actually push the controller's buttons;
> for each controller we would give it the ability to send a signal
> corresponding to each possible button-push ...
>
> For a controller like the Wiimote this would be slightly trickier, but
> still do-able...
>
> So we are ignoring the robotics problem of figuring out how to use each
> physical controller, but no other aspect of gameplay...
>
> One could also of course make a "video game playing robot" test, which
> would have a limited robotics aspect... but I think the purely software
> version suffices as an OK AGI test...
>
> ben
>
> On Sat, Jun 9, 2012 at 5:09 PM, Mike Tintner <[email protected]> wrote:
>
>> Yes, I guess that's in principle a test of AGI (& that's why it's worth
>> discussing this subject at length).
>>
>> I don't think in practice it will be anything but an extremely far
>> distant AGI *robotic* project - although I haven't thought this through yet.
>>
>> There are two major problems here.
>>
>> First, most videogames are played by humans and depend on an embodied
>> human player.
>>
>> Ok, so you forget about those, and concentrate on games that can be
>> played without embodied human intervention - like chess, draughts, etc.
>>
>> Your problem then is presumably - off the top of my head - to define a
>> generalizable concept of "move" and "piece" that will enable your
>> computational would-be AGI to absorb entirely new games, with new, diverse
>> kinds of moves and pieces.  Again, they should be any games that are
>> comparable to chess - presumably board games with pieces that move?
>>
>> I doubt that those generalizable concepts are possible.  My initial
>> hypothesis is that too many different kinds of moves are possible.
>>
>> But by all means take this further.
>>
>> P.S. I do think that you could have sub-AGI programs that creatively
>> explore many different lines of movement towards goals and present them to
>> humans for final judgment (although I don't think they apply here to games).
>>
>>
>>  *From:* Ben Goertzel <[email protected]>
>> *Sent:* Saturday, June 09, 2012 9:49 PM
>> *To:* AGI <[email protected]>
>> *Subject:* Re: [agi] The 2 Tests of AGI - generalizability & creativity
>>
>> Mike T,
>>
>> A fairly decent purely computational AGI test (suggested by Sam Adams
>> from IBM) would be the Video Game Test
>>
>> --- Learn to play a series of randomly chosen human videogames, based
>> only on interaction with the games themselves, and the goal of maximizing
>> score (or whatever the goal of each game is)
>>
>> Of course, this only works if the AGI designer has looked at, say, 1% or
>> fewer of existing videogames when building his system.   It doesn't work if
>> the AGI designer has hired 1,000,000 programmers to write game-playing AIs
>> separately for each game, and then wired them together.
>>
>> This is not a perfect test, but success on it would be rather
>> compelling...
>>
>> We briefly discussed this along with the Woz coffee test and others in
>> our article on AGI in the recent issue of AI Magazine...
>>
>> ... ben g
>>
>>
>> On Sat, Jun 9, 2012 at 3:04 PM, Mike Tintner <[email protected]> wrote:
>>
>>> Sergio,
>>>
>>> The Woz test, as I indicated to Bob, is indeed extremely complicated. I
>>> used it only because it's already out there - and is therefore helpful as a
>>> *loose* guide/image.
>>>
>>> The other isn't really Ben's - it's the basic fetch test a dog faces -
>>> he must (and will) fetch a ball thrown by his master in more or less any
>>> field -
>>>
>>> this basically means he must (and will) negotiate more or less any
>>> unfamiliar terrain (within loose limits) -
>>>
>>> he can create and negotiate a course across terrains of grassy clumps,
>>> rocky ground, sandy beach, furniture and furnishings in a building et al -
>>> all of which will spring surprises
>>>
>>> also, of course, the ball could end up hidden from view in different
>>> ways and situations
>>>
>>> there's no way the dog could be specifically preprogrammed for every new
>>> terrain and hidden ball... (nor, by extension, is there any complex
>>> "set" that can infer the features of every new terrain)
>>>
>>> if your robot can simply negotiate new terrain after new terrain
>>> somewhat like a dog (or all other life forms) and not even fetch a ball -
>>> it's AGI
>>>
>>> If we were talking a relatively simple practical starting-point, I would
>>> suggest aiming for a robot that could negotiate just a few metres of
>>> endlessly diverse terrains (which is more or less what roboticists are
>>> attempting now, although I'll bet they all still cheat).
>>>
>>> P.S. I don't think a purely computational AGI project is possible. Once
>>> you think in depth about the goals of generalizability and creativity, you
>>> will realise they depend on being implemented by a body with an extensive
>>> range/spectrum of different lines of movement and observation.  The body is
>>> the foundation of generality and creativity - it affords the capacity to
>>> always try out new lines of movement and looking, and handle objects and
>>> negotiate terrains in new ways.
>>>
>>> By all means try to outline a project that contradicts me. It will be
>>> interesting regardless.
>>>
>>>
>>>
>>> *From:* Sergio Pissanetzky <[email protected]>
>>>  *Sent:* Saturday, June 09, 2012 7:27 PM
>>> *To:* AGI <[email protected]>
>>> *Subject:* RE: [agi] The 2 Tests of AGI - generalizability & creativity
>>>
>>> Mike,
>>>
>>> I like the concept of the Woz test. However, the test itself has three
>>> problems. First, it is unfair to those who do not build robots. Second, it
>>> requires the ability to recognize images, which is in itself a major test.
>>> Third, it requires considerable computing power, besides generalizability
>>> and creativity - which would be unfair to those who may have a good idea
>>> but lack the necessary power, such as me. Do you think it can be rephrased
>>> so as to eliminate these limitations?
>>>
>>> Can you please explain what Ben's fetch test is?
>>>
>>> Sergio
>>>
>>> *From:* Mike Tintner [mailto:[email protected]]
>>> *Sent:* Saturday, June 09, 2012 5:37 AM
>>> *To:* AGI
>>> *Subject:* [agi] Re: The 2 Tests of AGI - generalizability & creativity
>>>
>>> P.S. The Woz Test ("go and make a cup of coffee in this new kitchen")
>>> is a test of creativity - of being able to design a course of action
>>> without specific programming.
>>>
>>> But (correct me) it isn't defined as a test of creativity - and should
>>> be.
>>>
>>> Note: there is a great deal of underlying unanimity here - in the Woz
>>> Test, Ben's fetch test and similar - but the basic principles involved
>>> (generalizability and creativity) haven't been clearly spelled out.
>>>
>>>
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> http://goertzel.org
>>
>> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
