> On Fri, 2002-11-01 at 05:18, C. David Noziglia wrote:
>> 
>> Why does perception, or any other aspect of AGI, have to follow the
>> model of what natural systems (like human beings) do?
>> 
>> Could not an AGI system simply be different, built for different
>> assumptions and modeling?
> 

Ok, I'm going to have to speak up in defense of brain-modeling.  First, as
Ben hinted at, it is important to distinguish between (at least) three
different kinds of brain-modeling in AGI:

1) Features or components of a system that are loosely inspired by the way
the human brain works.  For example, artificial neural nets are loosely
inspired by the way brains work, but most do not follow the behavior of
actual neurons very closely, or use brain-like algorithms for reinforcement.

2) Features or components of a system that attempt to explicitly model the
way some particular feature or component of the human brain carries out a
particular task.  Examples of this category are the "algorithms" of human
vision presented by David Marr.

3) Features or components of a system that attempt to explicitly model the
_tasks performed_ by some feature or component of the human brain, without
necessarily using the same methods.  For example, one feature of the human
brain is that it can solve abstract problems by mapping them into spatial
problems and manipulating them.  I think such an ability would be useful in an
AGI, but it need not use the same algorithms or produce the same results as the
human system.  This is distinguished from 1) in that we are still looking to
benchmark the artificial system against the natural one.

As a side-note, Novamente does lots of 1), but very little of 2) or 3). 
Eliezer Yudkowsky is, as I understand him, advocating systems built using a
combination of 2) and 3).  I would argue that all three are important in
some aspects of AGI design, but perhaps less so in others. However, I'm not
confident enough in my views/knowledge to say that any of these positions
is _required_ to make AGI work, so take it with a grain of salt. ;-)

You are correct in stating that an AGI does not need to have the same
assumptions as the human brain.  However, to some extent the assumptions
will _have_ to be similar, since an AGI will only be useful to the extent
that it can function autonomously in the physical world (even if the AGI
doesn't have a body, many tasks are constrained by the nature of physical
reality) and in the human world (understanding human language, for
instance).  I would like to point out that even relatively "abstract"
domains, such as math and coding, are still anthropocentric: high-level
programming languages, for instance, have been designed by and for humans, in
line with the particular ways in which humans perceive and interact (though
admittedly very strange humans, in many cases ;->).

One could still argue that you don't have to "think like a human" to
understand the human world, but I am rather skeptical of this, given the
vast space of possible systems for communicating with other minds, ways of
doing mathematics, ways of designing programming languages, etc...

James wrote:
> Solving the general problem of intelligence in the
> theoretical sense doesn't really involve knowing anything about
> wetware. 
But I would argue, in light of the above, that there is no such thing as the
"general problem of intelligence", and that the intelligence or lack thereof
of AGIs is benchmarked based on their performance in the human world.  To
give one more example, I think it's clear that one aspect of intelligence is
the ability to recognize the existence of, and to model, other minds.  Humans
are fairly good at modeling other humans, but probably stink at modeling
eight-legged marsupials from Tau Ceti.  In a world with 6 billion humans,
part of the criterion for how "smart" an AGI is will be how well it models
us.

Cheers,
Moshe

