Yeah, that's where the misunderstanding is... "low-level input" is too fuzzy a concept.
I don't know if this is the accepted mainstream definition of embodiment, but this is how I see it. The thing that distinguishes an embodied agent from an unembodied one is whether the agent is given pre-structured input - that is, whether information outside the agent is directly available to the agent. A fully embodied agent has no access at all to its environment; it only has access to the outputs of its sensory apparatus.

Obviously animal nervous systems are the inspiration here. For example, we have thermo-receptors in our skin that fire at different rates depending on the temperature. The interesting thing to note is that these receptors can be stimulated by things other than temperature, like the capsaicin in hot peppers. That's important because our experience of hotness is present only to the extent that our thermo-receptors fire, without regard to how they're stimulated. Likewise for the patterns we see when we rub our eyes for long enough - we're using physical pressure to stimulate photo-receptors.

What all that reveals is that there is a boundary between the environment and the agent, and information does not cross that boundary. The interaction between the environment and the sensory apparatus results in *perturbations* in the agent. The agent constructs its models based solely on the perturbations of its sensory apparatus. It doesn't know what the environment is and in fact has no access to it whatsoever. This is a key idea behind autopoiesis (http://en.wikipedia.org/wiki/Autopoiesis), which is a way to characterize the difference between living and non-living systems.

So all text-based I/O fails this test of embodiment, because the agent is not structuring the input. That modality is based on the premise that you can directly import knowledge into the agent, and that is an unembodied approach.
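To make the boundary concrete, here's a minimal Python sketch of the thermo-receptor example. All the names and numbers are mine and purely illustrative - the point is only that the agent sees a firing rate, never the environmental variables that caused it:

```python
# Illustrative sketch (hypothetical names/values): the agent never sees the
# environment's state, only the firing rates its receptors produce.

def receptor_firing_rate(temperature_c: float, capsaicin: float) -> float:
    """A thermo-receptor transduces several distinct causes into one signal."""
    # Both real heat and capsaicin raise the firing rate; the signal itself
    # carries no label saying which cause produced it.
    drive = max(0.0, temperature_c - 30.0) + 50.0 * capsaicin
    return min(100.0, drive)  # spikes/sec, saturating


class Agent:
    """Builds its model from perturbations alone; no access to the environment."""

    def __init__(self) -> None:
        self.perceived_hotness = 0.0

    def perceive(self, firing_rate: float) -> None:
        # The agent's "hotness" is constructed entirely from the perturbation.
        self.perceived_hotness = firing_rate / 100.0


agent = Agent()

# Hot water vs. a pepper at room temperature: different environments,
# identical perturbation, hence an identical experience for the agent.
agent.perceive(receptor_firing_rate(temperature_c=55.0, capsaicin=0.0))
hot_water = agent.perceived_hotness

agent.perceive(receptor_firing_rate(temperature_c=25.0, capsaicin=0.5))
pepper = agent.perceived_hotness

print(hot_water, pepper)  # both 0.25: the agent cannot tell the causes apart
```

The design choice worth noticing is that `Agent.perceive` takes only the firing rate as an argument: the environment's variables (`temperature_c`, `capsaicin`) never cross the boundary into the agent.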
Terren

--- On Fri, 8/22/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> On Fri, Aug 22, 2008 at 5:49 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> > On Fri, Aug 22, 2008 at 5:35 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
> >>
> >> She's not asking about the kind of embodiment, she's asking what's the
> >> use of a non-embodied AGI. Your quotation, dealing as it does with
> >> low-level input, is about embodied AGI.
> >>
> >
> > I believe "non-embodied" meant to refer to I/O fundamentally different
> > from our own (especially considering the context of the previous message
> > in this conversation). What is a non-embodied AGI? AGI that doesn't exist?
> >
>
> On second thought, maybe the term "low-level input" was confusing. I
> include things like a text-only terminal or 3D vector graphics input or
> an Internet connection or whatever other kind of interaction with the
> world in this concept. Low-level is relative to a model in the mind; it
> is the point where the non-mind environment directly interacts with the
> model, on which additional levels of representation are grown within the
> mind, making that transition point the lowest level. I didn't mean to
> imply that input needs to be something like a noisy video stream or a
> sense of touch (although I suspect it'll be helpful developmentally).
> --
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com
