In the example cited, I think the basic error is that it is still the human 
being who is making sense of the robot's behavior. The controlled object does 
not in fact "seek refuge" at all. The program most likely measures a certain 
parameter, which creates the illusion of a "battery", and then triggers a 
"refuel" process, which is yet another illusion. In this scenario, the 
intelligence is clearly the programmer's. By the same token, "boredom" would be 
a function of the programmer too. The apparent need to ascribe human emotion to 
machines is rather obscure. Not that it cannot be achieved programmatically; it 
is just obscure.
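To make the point concrete, here is a minimal sketch (all names hypothetical) of what such a "refuge-seeking" robot typically reduces to: a number the programmer chose to decrement, and an if-statement the programmer wrote in advance.

```python
class Robot:
    def __init__(self):
        self.battery = 100   # an integer, not a need
        self.position = 10   # distance from "home"

    def tick(self):
        self.battery -= 1            # the programmer's stand-in for "hunger"
        if self.battery < 20:        # the programmer's threshold, not the robot's fear
            self.seek_refuge()

    def seek_refuge(self):
        # Walk home and "refuel" -- entirely scripted by the programmer.
        while self.position > 0:
            self.position -= 1
        self.battery = 100

robot = Robot()
for _ in range(85):
    robot.tick()
print(robot.battery)
```

Nothing here interprets, wants, or fears anything; the "behavior" is the programmer's logic replayed, and any sense-making happens in the human observer.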

I recall how, in 1991, when I encoded human-like responses to user-error 
messages, the practice was frowned upon by the IT department. It was easy to do 
even then, but it was merely the programmer instructing the program to pretend 
to react like a human to input errors. In reality, neither the programmer nor 
the program was conscious of what the user was actually doing.
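A hedged sketch (hypothetical messages, modern syntax) of what such "human-like" error handling amounts to: a lookup table the programmer wrote ahead of time, not any awareness of the user.

```python
# Canned "reactions" authored by the programmer, keyed by error code.
CANNED_REPLIES = {
    "file_not_found": "Hmm, I can't seem to find that file. Are you sure it exists?",
    "bad_number": "Oops! That doesn't look like a number to me. Try again?",
}

def react(error_code: str) -> str:
    # The "reaction" is pure retrieval; neither program nor programmer
    # is conscious of what the user actually did.
    return CANNED_REPLIES.get(error_code, "Something went wrong. Sorry!")

print(react("bad_number"))
```

The friendliness lives entirely in strings the programmer typed; the program itself has no model of the user at all.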

My contention is that the industry needs to consistently, and with semantic 
discipline, separate fact from fiction, programming from illusion, and AI from 
mere smartness. In this example, then: if the object could locate and identify 
a data or information construct in the database (which may have been inserted 
via visual data, bootstrapping, or another machine), interpret it through its 
own developing logic as relevant "fuel", and autonomously figure out how to 
find its way to the identified resource and how to refuel itself - to 
replenish its own functional platform - then I would agree that a level of 
functional AGI has been achieved.


Robert Benjamin

________________________________
From: Keyvan Mir Mohammad Sadeghi <[email protected]>
Sent: 13 March 2017 01:03 AM
To: AGI
Subject: [agi] The hard problem of AGI grounding

We've all heard the cliché: an AI feels alive and doesn't want to die:
https://youtu.be/lhoYLp8CtXI

I've actually seen a prototype: a robot in a game world taking refuge back in 
its house, because that's where the battery is.

But what happens when it has been "alive" for a while, has read all of 
Wikipedia and everything else on the web, and has lived for millions of years 
with all the primitive human-like instincts we've given it, in the span of five 
minutes,

and it's bored!

How to keep it "on" after that?



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
