Dr. Matthias Heger wrote:
Brad Pausen wrote: The question I'm raising in this thread is more one of
priorities and allocation of scarce resources. Engineers and scientists
comprise only about 1% of the world's population. Is human-level NLU
worth the resources it has consumed, and will continue to consume, in
the pre-AGI-1.0 stage? Even if we eventually succeed, would it be worth
the enormous cost? Wouldn't it be wiser to "go with the strengths" of
both humans and computers during this (or any other) stage of AGI
development? <<<
I think it is not so important what abilities our first AGIs will have.
Human language would be a nice feature, but it is not necessary.
Agreed. And nothing in the above quote indicates otherwise. I'm only
arguing that we should not spend scarce resources now (or ever, really)
trying to implement unnecessary features. Both human-level NLU and
human-like embodiment are, in my considered opinion, unnecessary for AGI 1.0.
It is more important how it works. We want to develop intelligent
software that has the potential to solve very different problems in
different domains. This is the main idea of AGI.
Imagine someone thinks he has built an AGI. How can he convince the
community that it is in fact AGI and not just AI? If he shows some
applications where his AGI works, then this is an indication of the G in
his AGI, but it is no proof at all.
I agree.
Even a Turing test would be no good
test, because given n questions for the AGI, I can never be sure whether
it would pass the test for a further n questions.
Ah, I see you've met my friend Mr. David Hume.
AGI is inherently a white-box
problem, not a black-box problem.
A chess-playing computer is for many people a stunning machine. But we
know HOW it works, and only(!) because we know the HOW can we evaluate
the potential of this approach for general AGI.
Brad, for this reason I think your question about whether the first AGI
should have the ability for human language or not is not so important.
If you can create software that has the ability to solve very
different problems in very different domains, then you have solved the
main problem of AGI.
Actually, I disagree with you here. There is really no need to create a
single AGI that can solve problems in multiple domains. Most humans can't
do that. I believe we can more easily coordinate a network of AGI
agents, each an expert in a single domain. These experts would be
trained by human experts (as well as be able to learn from experience) and
would be able to exchange information across domains as needed (need being
determined, perhaps, by an "expert supervisor" AGI agent). None of these
agents, alone, would qualify as AGI (because they are narrow-domain
experts). The system in which these AGI experts function would, however,
constitute true AGI.
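To make the idea concrete, here is a toy sketch of that kind of system: narrow-domain expert agents plus a supervisor that routes each problem to whichever expert scores itself as most competent. All the class names, domains, and the keyword-overlap scoring are my own invented illustrations, not anyone's actual design.

```python
# Toy sketch: a network of narrow-domain expert agents coordinated
# by a supervisor. No single agent is general; the "G" lives in the
# routing and coordination layer. Names and scoring are hypothetical.

class ExpertAgent:
    def __init__(self, domain, knowledge):
        self.domain = domain
        self.knowledge = knowledge  # terms learned from human experts

    def competence(self, problem):
        # Crude relevance score: fraction of problem terms this
        # expert recognizes. A real system would do far more.
        terms = problem.lower().split()
        return sum(t in self.knowledge for t in terms) / len(terms)

    def solve(self, problem):
        return f"[{self.domain}] recommendation for: {problem}"


class SupervisorAgent:
    def __init__(self, experts):
        self.experts = experts

    def dispatch(self, problem):
        # Route the problem to the expert claiming highest competence.
        best = max(self.experts, key=lambda e: e.competence(problem))
        return best.solve(problem)


experts = [
    ExpertAgent("cardiology", {"chest", "pain", "heart"}),
    ExpertAgent("dermatology", {"rash", "skin", "itch"}),
]
supervisor = SupervisorAgent(experts)
print(supervisor.dispatch("persistent skin rash"))
# -> [dermatology] recommendation for: persistent skin rash
```

Note that neither expert is general; only the system as a whole exhibits cross-domain reach, which is exactly the point being argued above.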
Your reply makes it sound like I have a "question" about whether the first
AGI should have human language ability. I have no question about this.
What I have is an informed opinion. It is this: requiring solution of an
AI-complete problem (human-level NLU) is the kiss of death for the success
of any AGI concept. If we let go of this strategy and concentrate on
making non-human-like intelligences (using already-proven AI strategies
that do not rely on NLU or embodiment and that leverage the strengths of
the only non-human intelligence we have at present), I believe we will get
to much more powerful AGI much sooner.
My concept of AGI holds that creating many different domain experts using
proven, narrow-AI technology and, then, coordinating a vast network of
these domain experts to identify/solve complex, cross-domain problems (in
real-time and concurrently if necessary) will, in fact, result in a system
that has a problem-solving capability greater than any single human being.
Without requiring human-level NLU or embodiment. It will be more robust
(massive redundancy, such as that found in biologically evolved systems,
is the key here) than any human being, be quicker than any human being, and
be more accurate than any human being (or, especially, any organization of
human beings -- have you ever tried to get an error in your HMO medical
records corrected?).
For example, in the (near, I hope) future when you feel sick, you will sit
down at your computer and call up a medical practitioner (GP) AGI agent (it
doesn't really matter from where, but assume from the Internet). This will
be the same GP AGI agent anyone else anywhere in the world would call up
(except, of course, each human is invoking a localized instance of the GP
AGI agent). Note that you're ahead of the game already. You didn't have
to wait two weeks to get an appointment (at 7 AM). You
didn't have to go to a remote location (the doctor's office or clinic). The
visit to your doctor is already much less stressful, a medical benefit in
and of itself. Once the GP AGI responds, you will only have to relate your
symptoms once. The GP AGI agent will evaluate your symptoms and, if a
consultation is needed, call in any required specialist AGI doctor agents.
The AGI agents will confer and decide on a course of treatment. The GP
agent will, then, write prescriptions or schedule tests, or perhaps even,
schedule surgery in accordance with that treatment plan.
[NOTE: I'm leaving a lot of stuff out here, such as the need to have a
human doctor "lay hands" on the patient to get sufficient information to
make a proper diagnosis, and other issues of a practical nature. I just
want you to know I know I'm leaving that stuff out. It's not essential to
the point I'm trying to make.]
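The consultation workflow above can be sketched in a few lines: the GP agent evaluates the reported symptoms, calls in specialist agents only when one is relevant, and merges their opinions into a treatment plan. Every name, trigger, and recommendation here is invented for illustration; this is a sketch of the flow, not a proposed implementation.

```python
# Toy sketch of the hypothetical GP-agent workflow: evaluate symptoms,
# consult specialist agents as needed, merge their advice into one plan.
# All specialist names, triggers, and advice strings are made up.

def gp_consult(symptoms, specialists):
    # Decide which specialists the reported symptoms warrant.
    relevant = [s for s in specialists if s["trigger"] in symptoms]
    if not relevant:
        return {"plan": "self-care and monitoring", "consulted": []}
    # Confer: collect each specialist's opinion, then merge.
    opinions = [s["advise"](symptoms) for s in relevant]
    return {"plan": "; ".join(opinions),
            "consulted": [s["name"] for s in relevant]}


specialists = [
    {"name": "cardiology",
     "trigger": "chest pain",
     "advise": lambda s: "order ECG"},
    {"name": "neurology",
     "trigger": "dizziness",
     "advise": lambda s: "schedule MRI"},
]

result = gp_consult(["chest pain", "fatigue"], specialists)
print(result["plan"])       # merged treatment plan
print(result["consulted"])  # which experts were called in
```

The interesting part, as the next paragraph argues, is not any individual agent but the dispatch-and-exchange logic connecting them.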
Is any individual AGI agent in the above scenario "smarter" than its
human domain expert? Not necessarily. Just as smart will do. However, in
many documented instances, classical AI production systems were, in fact,
"smarter" than the human experts who trained them (if only because the AI
never forgot its data or rules, never got distracted, had a headache,
didn't get enough sleep the night before, etc.). The real AGI stuff is in
how these agents are able to determine which helper agents to call upon and
in their ability to exchange information with those helper agents quickly
and accurately. To you, the human, it will seem as though you have visited
the best doctor on the planet. In fact, you have. Just not a human
doctor. And, very likely, not just one non-human doctor.
This is what I mean when I say, human-beneficial AGI. No NLU or embodiment
required whatsoever.
While this type of AGI may evince some human characteristics when being
evaluated by humans, it will not be as the result of a deliberate attempt
to simulate deep human behavior.
Brad
Of course it is important to show what the AGI can do with some
examples. But for an evaluation of its potential to be a real AGI, it is
more important how it works.
------------------------------------------- agi Archives:
https://www.listbox.com/member/archive/303/=now RSS Feed:
https://www.listbox.com/member/archive/rss/303/ Modify Your
Subscription:
https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com