On Tue, Apr 22, 2008 at 12:18 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
>
> At a very high level of abstraction, most of the AGI (and AI, for that
> matter) schemes I've seen can be caricatured as follows:
>
> 1. Receive data from sensors.
> 2. Interpret into higher-level concepts.
> 3. Then a miracle occurs.
> 4. Interpret high-level actions from 3 into motor commands.
> 5. Send to motors.
>
> What's wrong with this? It implicitly assumes that data flows from 1 to 5
> in waterfall fashion, and that feedback, if any, occurs either within 3 or
> as a loop thru the external world.
>
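Spelled out as code, the caricature reads as a strictly feed-forward pipeline. A minimal sketch, purely illustrative; every name below is a hypothetical placeholder, not anyone's real API:

# A naive rendering of the five-step caricature as a strict
# feed-forward pipeline; all names and values are made up.

def read_sensors():
    # 1. Receive data from sensors (stubbed as a fixed reading).
    return {"camera": [0.2, 0.7], "touch": 0.0}

def interpret(raw):
    # 2. Interpret into higher-level concepts.
    return {"object_ahead": raw["camera"][1] > 0.5}

def miracle_occurs(concepts):
    # 3. Then a miracle occurs: concepts in, high-level action out.
    return "avoid" if concepts["object_ahead"] else "proceed"

def to_motor_commands(action):
    # 4. Interpret the high-level action into motor commands.
    if action == "avoid":
        return {"left_wheel": 0.1, "right_wheel": 0.9}
    return {"left_wheel": 0.5, "right_wheel": 0.5}

def send_to_motors(commands):
    # 5. Send to motors (stubbed as a print).
    print(commands)

# Data flows strictly 1 -> 5, with no internal feedback; the only
# loop closes through the external world.
send_to_motors(to_motor_commands(miracle_occurs(interpret(read_sensors()))))

Note that no arrow points backwards: step 2 never learns what step 4 did, which is exactly the waterfall assumption being criticized.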
Thanks Josh, that clarifies the picture a bit. I think the root of the problem lies in projecting reality onto the mind: in trying to draw a clear-cut distinction between the parts of an AI that find things out about reality and the parts that act on it, when in fact the distinction shouldn't be that sharp. It only looks sharp because low-level perception is mainly about reality, low-level action can likewise be regarded as a subgoal-generation process, and the systems we have built (including our disciplines of deliberative reasoning) operate this way. But the facts that get extracted are selected for their usefulness to action, and the process of perceiving them can proceed seamlessly into actuation. High-level concepts are handles for supervised training, not guardians standing between perception and action.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
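For contrast with the waterfall sketch above, here is a toy closed loop in which what gets perceived is selected by the current goal and flows straight into actuation. It is purely illustrative; every name and number is a made-up placeholder, not a claim about any actual architecture.

# Perception and action share one loop: the feature extracted from
# the senses is chosen for its usefulness to the current goal, and
# each extracted fact feeds directly into the next motor update.

def sense(world, goal):
    # Goal-directed perception: only the task-relevant feature
    # (the hand-to-target error) is extracted at all.
    if goal == "reach_target":
        return world["target_pos"] - world["hand_pos"]
    return 0.0

def act(world, error, gain=0.3):
    # Actuation proceeds directly from the perceived error; no
    # separate planning stage stands guard between the two.
    world["hand_pos"] += gain * error

world = {"hand_pos": 0.0, "target_pos": 1.0}
for _ in range(20):
    error = sense(world, "reach_target")   # perceive...
    act(world, error)                      # ...and act, in one motion
print(round(world["hand_pos"], 3))         # ~1.0: converged on the target

Nothing in this loop deserves the name "high-level concept"; it runs on whatever feature the goal makes useful, which is the sense in which perception can seamlessly proceed into actuation.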
