On Thu, Sep 4, 2008 at 2:22 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> The paper seems to argue that embodiment applies to any system with inputs 
> and outputs, and therefore all AI systems are embodied.

No. It argues that since every system has inputs and outputs,
'embodiment', as a non-trivial notion, should be interpreted as
"taking experience into account when behaving". By that criterion,
traditional symbolic AI systems, like CYC, are still disembodied.

> However, there are important differences between symbolic systems like NARS 
> and systems with external sensors such as robots and humans.

NARS, when implemented, has input/output channels, and therefore has
external sensors.

I guess you still see NARS as using model-theoretic semantics, so you
call it "symbolic" and contrast it with systems that have sensors.
This is not correct --- see
http://nars.wang.googlepages.com/wang.semantics.pdf and
http://nars.wang.googlepages.com/wang.AI_Misconceptions.pdf
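
To make the contrast concrete, here is a minimal sketch (my own
illustrative Python, not the actual NARS code) of the experience-grounded
truth values described in those papers: a statement's truth is a summary
of the evidence the system has collected, not a correspondence to a model.

def truth(w_plus: float, w: float, k: float = 1.0) -> tuple[float, float]:
    """Return (frequency, confidence) from positive and total evidence.

    k is the 'evidential horizon' constant, conventionally 1 in NARS.
    """
    frequency = w_plus / w      # fraction of collected evidence that is positive
    confidence = w / (w + k)    # grows toward 1 as more evidence accumulates
    return frequency, confidence

# After observing 3 robins that fly and 1 that does not:
print(truth(3, 4))              # (0.75, 0.8)

A system whose truth values are defined this way is "embodied" in the
paper's sense, whether its experience arrives through a camera or a
text channel.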

> The latter are analog, e.g. the light intensity of a particular point in the 
> visual field, or the position of a joint in an arm. In humans, there is a 
> tremendous amount of data reduction from the senses, from 137 million rods 
> and cones in each eye each firing up to 300 pulses per second, down to 2 bits 
> per second by the time our high level visual perceptions reach long term 
> memory.

Within a given accuracy, 'digital' and 'analog' have no fundamental
difference --- an analog signal can always be sampled and quantized to
that accuracy. I hope you are not arguing that only analog systems can
be embodied.
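
For scale, a back-of-envelope reading of the figures you quote (with my
own, admittedly crude, assumption of roughly one bit per pulse):

# All inputs below are the numbers from your message; only the
# one-bit-per-pulse reading is my assumption.
receptors = 137e6       # rods and cones per eye
peak_rate = 300         # pulses per second per receptor
raw_bits = receptors * peak_rate    # ~4.1e10 bits/s of raw signal
retained = 2                        # bits/s reaching long-term memory
print(f"reduction factor ~ {raw_bits / retained:.1e}")  # ~2.1e10

A reduction of ten orders of magnitude says a lot about abstraction in
perception, but nothing about whether the raw signal must be analog.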

> AI systems have traditionally avoided this type of processing because they 
> lacked the necessary CPU power. IMHO this has resulted in biologically 
> implausible symbolic language models with only a small number of connections 
> between concepts, rather than the tens of thousands of connections per neuron.

You have made this point about "CPU power" several times, and I'm still
not convinced that the bottleneck of AI is hardware capacity. Also,
there is no reason to believe that an AGI must be designed in a
"biologically plausible" way.

> Another aspect of embodiment (as the term is commonly used), is the false 
> appearance of intelligence. We associate intelligence with humans, given that 
> there are no other examples. So giving an AI a face or a robotic body modeled 
> after a human can bias people to believe there is more intelligence than is 
> actually present.

I agree with you on this point, though I will not argue it in the paper
--- doing so would amount to calling the roboticists "cheaters", even
though it is indeed the case that work in robotics attracts public
attention much more easily.

Pei

