Dr. Matthias Heger wrote:
Brad Paulson wrote:
More generally, as long as AGI designers and developers insist on
simulating human intelligence, they will have to deal with the
AI-complete problem of natural language understanding. Looking for new
approaches to this problem, many researchers (including prominent
members of this list) have turned to "embodiment" (or "virtual
embodiment") for help.

We only know of one human-level intelligence that works. And it works
with embodiment. So, for this reason, it seems to be a useful approach.

Dr. Heger,

First, I don't subscribe to the belief that AGI 1.0 need be human-level. In fact, my belief is just the opposite: I don't think it should be human-level. And, with all due respect, sir, while we may know that human-level intelligence works, we have no idea (or very little idea) *how* it works. That, to me, seems to be the more important issue.

If we did have a better idea of how human-level intelligence worked, we'd probably have built a human-like AGI by now. Instead, for all we know, human intelligence (and not just the absence or presence or degree thereof in any individual human) may be at the bottom end of the scale in the universe of all possible intelligences.

You are also, again with all due respect, incorrect in saying that we have no other intelligence with which to work. We have the digital computer. It can beat expert humans at the game of chess. It can beat any human at arithmetic -- both in speed and accuracy. Unlike humans, it remembers anything ever stored in its memory and can recall anything in its memory with 100% accuracy. It never shows up to work tired or hung over. It never calls in sick. On the other hand, what a digital computer doesn't do well at present, things like understanding human natural language and being creative (in a non-random way), humans do very well.

So, why are we so hell-bent on building an AGI in our own image? It just doesn't make sense when it is manifestly clear that we know how to do better. Why aren't we designing and developing an AGI that leverages the strengths, rather than attempts to overcome the weaknesses, of both forms of intelligence?

For many tasks that would be deemed intelligent if Turing's imitation game had not required natural HUMAN language understanding (or the equivalent mimicking thereof), we have already created a non-human intelligence superior to human-level intelligence. It "thinks" nothing like we do (base-2 vs. base-10) yet, for many feats of intelligence only humans used to be able to perform, it is a far superior intelligence. And, please note, not only is human-like embodiment *not* required by this intelligence, it would be (as it is to the human chess player) a HINDRANCE.

But, of course, if we always use humans as a guide to develop AGI,
then we will probably end up with limitations similar to those we
observe in humans.

I actually don't have a problem with using human-level intelligence as an *inspiration* for AGI 1.0. Digital computers were certainly inspired by human-level intelligence. I do, however, have a problem with using human-level intelligence as a *destination* for AGI 1.0.

I think an AGI that is to be useful to us must be a very good
scientist, physicist, and mathematician. Are the human kind of learning
by experience and the human kind of intelligence good for this job? I
don't think so.

Most people on this planet are very poor at these disciplines, and I
don't think this is only a question of education. There seems to be a
very subtle fine-tuning of genes necessary to raise the level of
intelligence from that of a monkey to that of the average human. And an
even more subtle fine-tuning is necessary to obtain a good
mathematician.


One must be careful with arguments from genetics. The average chimp will beat any human, hands down, in a short-term memory contest. I don't care how good the human contestant is at mathematics. Since judgments about intelligence are always relative to the environment in which it is evinced, in an environment where those with good short-term memory skills thrive and those without barely survive, chimps sure look like the higher intelligence.

This is discouraging for the development of AGI because it shows that
human-level intelligence is not only a question of the right
architecture; it seems to be more a question of the right fine-tuning
of some parameters. Even if we knew we had the right software
architecture, the really hard problems would still remain.


Perhaps. But your first sentence should have read, "This is discouraging for the development of HUMAN-LEVEL AGI because...". It doesn't really matter to a non-human AGI.

We know that humans can swim. But who would create a swimming machine
by following the example of human anatomy?


Yes. Just as we didn't design airplanes to fly "bird-like," even though the bird was our best source of inspiration for developing non-bird-like flight. Airplanes fly not at all like birds, but, for some very human-beneficial applications, they fly better than birds.

Similarly, we know that some humans can be scientists. But is following
the example of humans really the best way to create an artificial
scientist? Probably not. If your goal is to create an artificial
scientist in nanotechnology, is it a good strategy to let this
artificial agent walk through an artificial garden with trees and
clouds and so on? Is this the best way to make progress in
nanotechnology, economics, and so on? Probably not.

But if we have no idea how to do it better, we have no other chance than
to follow the example of human intelligence.


Fortunately, as I argued above, we do have other choices. We don't have to settle for human-like.

Cheers,
Brad


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com
