On 09.06.2012 04:04, Jim Bromer wrote:
Because we would be testing primitive programs, they have to be
defined by the programmers. Other AGI programmers might then comment
sympathetically to see if the -test- could be made a little more
sophisticated for the programmer's program. The program would have to
learn. The learning could take place through direct instruction but
the program would have to be able to figure some things out for
itself. The learning method would have to go beyond basic filling of
variable types or of basic numerical computation.


Some common problems with this kind of thing:

i) The environment is too simple.

ii) The agent-environment system reaches an equilibrium state, and then stays there. This is an "informational death" or "end of history" situation.

iii) The design of the agent-environment system intentionally or unintentionally precludes virtual machine stratification.

iv) The communications system between agents is not Turing complete, with only simple signals or gestures being possible.
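Problem (ii) is easy to demonstrate in miniature. The sketch below is purely illustrative (the update rule and function names are my own invention, not any real testbed): a deterministic agent-environment loop whose joint state quickly converges to a fixed point, after which nothing new ever happens — the "informational death" situation.

```python
# Toy illustration of problem (ii): an agent-environment system that
# settles into a fixed point ("informational death"). The update rule
# and all names are hypothetical, chosen only to make convergence visible.

def step(agent_state: int, env_state: int) -> tuple[int, int]:
    """Deterministic joint update; an averaging rule that contracts
    toward a fixed point."""
    new_agent = (agent_state + env_state) // 2
    new_env = (env_state + new_agent) // 2
    return new_agent, new_env

def run_until_equilibrium(agent: int, env: int, max_steps: int = 1000) -> int:
    """Return the step at which the joint state stops changing, or -1
    if no equilibrium is reached within max_steps."""
    seen = (agent, env)
    for t in range(1, max_steps + 1):
        agent, env = step(agent, env)
        if (agent, env) == seen:
            # Equilibrium: no further information enters the system.
            return t
        seen = (agent, env)
    return -1

print(run_until_equilibrium(100, 0))
```

Any contracting dynamics of this kind ends history in a handful of steps; a test environment has to be designed so that the joint agent-environment dynamics never collapse to such a fixed point.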


To create a good test environment for AGIs, there must be no single optimal or correct solution to the problem of existing in that environment. Being intelligent involves continuous learning in a dynamic environment that offers multiple possible trade-offs and strategies (aka "complex goals in complex environments").
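One way to see the "no single optimal solution" criterion concretely: make the environment non-stationary, so that any fixed policy eventually stops being the best one. The sketch below is an illustrative assumption on my part (a drifting two-armed bandit, not any actual benchmark): the two arms' payoffs oscillate out of phase, so an agent that keeps re-evaluating outperforms every fixed strategy over a long horizon.

```python
# Minimal sketch of the "no single optimal solution" criterion: a
# non-stationary two-armed bandit whose payoffs drift over time.
# The payoff function and horizon are illustrative assumptions,
# not a real AGI test environment.
import math

def payoff(arm: int, t: int) -> float:
    """Arm payoffs oscillate out of phase; which arm is better depends on t."""
    phase = 0.0 if arm == 0 else math.pi
    return 1.0 + math.sin(t / 50.0 + phase)

def fixed_policy_return(arm: int, horizon: int) -> float:
    """Total payoff of an agent that commits to one arm forever."""
    return sum(payoff(arm, t) for t in range(horizon))

def adaptive_return(horizon: int) -> float:
    """Total payoff of an agent that re-evaluates every step and
    always plays the currently better arm."""
    return sum(max(payoff(0, t), payoff(1, t)) for t in range(horizon))

h = 1000
print(adaptive_return(h) > max(fixed_policy_return(0, h),
                               fixed_policy_return(1, h)))
```

Because the "correct" action keeps changing, continuous learning is rewarded and any static solution decays — which is exactly the property the test environment should have.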


