James Ratcliff wrote:
 Every AGI but the most trivially simple AI must run in a simulated
 environment of some sort.

Not necessarily, though in most cases yes. As a counterexample: a human scholar reads Plato and publishes an analysis of what he has read. There is no interaction with the environment in the sense I believe you mean -- there is input and output, but the two are disconnected, and the output doesn't affect the input -- yet it is clearly a human-level intellectual activity. Still, interacting with an environment is often more interesting.

 There must be structure to its internal information nodes, some level
 of hierarchy for storage and usage, correct? Most nodes will
 contain base nodes such as color, weight, or position. How can a
 network be created without these? The AGI may not have direct
 experiential sampling of these concepts through an input device, but
 the concepts must still be there.

Three points: 1) The main thing I was arguing is that the base nodes do not need to be different from the rest, other than their position in the network; there is no need for an equivalent of software primitive functions. 2) A hierarchical network needs base nodes, but a graph does not -- it can be circular. 3) There are no true primitive concepts in the real world. Or rather, primitives only exist within a given perspective; change the perspective and you can define the previous primitives in terms of other concepts. Color seems primitive from a vision perspective, but switch to a physics perspective and you can talk about photons; switch to a cultural perspective and you can talk about a color being warm, earthy, or bold, etc.
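The hierarchy-versus-graph point can be made concrete. Here is a minimal, hypothetical Python sketch (the node names and the cycle check are illustrative, not from any real system) of a dictionary-style concept graph: every node is defined only in terms of other nodes, so no node is a privileged primitive.

```python
# A circular concept graph: each concept is defined in terms of other
# concepts, so there are no base nodes -- unlike a tree/hierarchy,
# which must bottom out in primitives. (Illustrative sketch only.)

concepts = {
    "color":  ["light", "vision"],   # color defined via light and vision
    "light":  ["photon", "vision"],  # light defined via photons
    "photon": ["light"],             # photon defined back via light (cycle)
    "vision": ["light", "color"],    # vision closes another cycle
}

def is_circular(graph):
    """Return True if the definition graph contains a cycle, i.e. some
    concept cannot be reduced to primitives by following its links."""
    def visit(node, stack):
        if node in stack:            # revisited a node on this path: cycle
            return True
        return any(visit(n, stack | {node}) for n in graph.get(node, []))
    return any(visit(node, set()) for node in graph)

print(is_circular(concepts))               # True: dictionary-style graph
print(is_circular({"a": ["b"], "b": []}))  # False: a hierarchy bottoms out
```

A tree-shaped version of the same data would force "b"-style leaf nodes to be primitives; the cyclic version needs none, which is the point above.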

 Data sources: But there is one sense in which a system must be
 grounded to provide useful results for the real world. The
 connections between concepts, and the statistics regarding those
 connections need to come from the real world. A dictionary is
 circular, but the connections between the nodes are set based on
 corresponding connections in the external world. Or to put that
 another way, you can't reason about something you have no data about.

> This seems to contradict the second notion.

The point I was trying to make is that the internal data must be influenced by the real world, but that is different from having a direct, primitive connection to the world. If by "grounded" you mean influenced by the world, then yes, an AI needs to be grounded to reason about the world. What I was objecting to is a definition that requires direct connections from the data to the world; those are not needed.



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com
