Dr. Matthias Heger wrote:

> Brad Paulson wrote:
>> Fortunately, as I argued above, we do have other choices. We don't
>> have to settle for human-like.
>
> So far, I do not see other choices. Chess is AI but not AGI.

Yes, I agree, but IFF by AGI you mean human-level AGI. As you point out
below, a lot has to do with how we define AGI.

> Your idea of an incremental roadmap to human-level AGI is interesting,
> but I think everyone who tries to build a human-level AGI already makes
> incremental experiments and first steps with non-human-level AGI in
> order to make a proof of concept. I think Ben Goertzel has done some
> experiments with artificial dogs and other non-human agents.
>
> So it is only a matter of definition what we mean by AGI 1.0. I think
> we already have AGI 0.0.x, and the goal is AGI 1.0, which can do the
> same as a human.

> Why this goal? An AGI which functionally resembles a human (not
> necessarily in algorithmic details) has the great advantage that
> everyone can communicate with this agent.

Yes, but everyone can communicate with "baby AGI" right now using a highly-restricted subset of natural human language. The system I'm working on now uses the simple declarative sentence, the propositional (if/then) rule statement, and the simple query as its NL interface. Declarations of fact and propositional rules are "upgraded," internally, to FOL+. Agent-to-agent communication is done entirely in FOL+. I had considered using Prolog for the human interface, but the non-success of Prolog in a community (computer programmers) already expert at communicating with computers in formal languages caused me to drop back to the more difficult, but not impossible, semi-formal NL approach.
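
To make that concrete, here's a toy sketch of the "upgrade" step. The grammar, predicate names, and FOL+ surface syntax below are simplified stand-ins for illustration only, not the production system:

    import re

    def parse_utterance(text):
        """Upgrade a restricted-NL utterance to a FOL-ish internal form."""
        text = text.strip().rstrip(".")
        # Propositional rule: "If x is a bird then x can fly."
        m = re.match(r"if (\w+) is an? (\w+) then \1 can (\w+)$", text, re.I)
        if m:
            var, cls, verb = m.groups()
            return f"forall {var}. {cls}({var}) -> can_{verb}({var})"
        # Simple declarative: "Tweety is a bird."
        m = re.match(r"(\w+) is an? (\w+)$", text, re.I)
        if m:
            ind, cls = m.groups()
            return f"{cls}({ind})"
        # Simple query: "Can Tweety fly?"
        m = re.match(r"can (\w+) (\w+)\?$", text, re.I)
        if m:
            ind, verb = m.groups()
            return f"? can_{verb}({ind})"
        raise ValueError("outside the restricted NL subset: " + text)

    print(parse_utterance("If x is a bird then x can fly."))  # forall x. bird(x) -> can_fly(x)
    print(parse_utterance("Tweety is a bird."))               # bird(Tweety)
    print(parse_utterance("Can Tweety fly?"))                 # ? can_fly(Tweety)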

We don't need to crack the entire NLU problem to be able to communicate with AGIs in a semi-formalized version of natural human language. Sure, it can get tedious, just as talking to a two-year-old human child can get tedious (unless it's your kid, of course: then it's fascinating!). Does it impress people at demos? The "average person"? Yep, it pretty much does, even though it's far from finished at this time. Skeptical AGHI (human-level AGI) designers and developers? Not so much. But I'm working on that!

The question I'm raising in this thread is more one of priorities and allocation of scarce resources. Engineers and scientists comprise only about 1% of the world's population. Is human-level NLU worth the resources it has consumed, and will continue to consume, in the pre-AGI-1.0 stage? Even if we eventually succeed, would it be worth the enormous cost? Wouldn't it be wiser to "go with the strengths" of both humans and computers during this (or any other) stage of AGI development?

Getting digital computers to understand natural human language at human level has proven to be an AI-complete problem. Do we need another fifty years of failing to achieve NLU with computers before we finally accept this? Developing NLU for AGI 1.0 plays to the strengths of neither the digital computer nor the human (who takes only about three years to gain a basic grasp of language and continues to improve that grasp into adulthood).

Computers calculate better than humans do. Humans are natural-language experts. IMHO, saying that the first version of AGI should include enabling computers to understand human language like humans is just about as silly as saying the first version of AGI should include enabling humans to calculate like computers.

IMHO, embodiment is another losing proposition where AGI 1.0 is concerned. For all we know, embodiment won't work until we can produce an artificial bowel movement. It's the "To think like Einstein, you have to stink like Einstein" theory. Well, I don't want AGI 1.0 to think like Einstein. I want it to think BETTER than Einstein (and without the odoriferous side effects, thank you very much).

> It would be interesting to me to know which set of abilities you want
> to have in AGI 1.0.

Well, we (humanity) need, first, to decide *why* we want to create another form of intelligence. And the answer has to be something other than "because we can." What benefits do we propose should accrue to humanity from such an expensive pursuit? In other words, what does "human-beneficial AGI" really mean?

Only once we have ironed out our differences in that regard (or, at least, have produced a compromise on a list of core abilities), should we start thinking about an implementation. In general, though, when it comes to implementation, we need to start small and play to our strengths.

For example, people who want to build AGHI tend to look down their noses at classic narrow-AI successes such as expert (production) systems (Ben G. is NOT in this group, BTW). This has prevented these folks from even considering using this technology to achieve AGI 1.0. I *am* (proudly and loudly) using this technology to build "bootstrapping intelligent agents" for AGI.
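
For anyone who hasn't met one lately, a production system is just rules fired over a working memory of facts. A toy forward-chaining sketch (the rules and facts are made-up examples, not any real knowledge base):

    def forward_chain(facts, rules):
        """Repeatedly fire any rule whose antecedents are all present."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedents, consequent in rules:
                if consequent not in facts and antecedents <= facts:
                    facts.add(consequent)   # the rule "fires"
                    changed = True
        return facts

    # Made-up triage fragment; each human expert would contribute rules
    # like these to his or her own narrow-AI instance.
    rules = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "see_doctor_now"),
    ]
    print(forward_chain({"fever", "cough", "short_of_breath"}, rules))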

Here's the theory:

    AGI x.x * narrow-AI(0) * narrow-AI(1) * ... * narrow-AI(N) = AGI x.x
    Lather
    Rinse
    Repeat

This is only the "surface equation." Behind each of those "narrow-AI" entries are tens, hundreds, or even (dare we hope?) thousands of individual instances (each trained by an individual human domain expert). And, of course, the entire process is recursive. Indeed, ideally, it will never end (i.e., the transition from AGI 1.0 to AGI 2.0 and so on will be seamless, "to infinity and beyond"). To use this AGI, we simply "reach into" the recursive process and "pull out" the current version.
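
One way to picture the recursion, in toy form (the version numbering and agent names here are illustrative assumptions, nothing more):

    class Generation:
        """One turn of the crank: AGI x.x composed with new narrow-AI agents."""
        def __init__(self, version, agents):
            self.version = version    # e.g. "0.0.3"
            self.agents = agents      # narrow-AI(0) ... narrow-AI(N)

        def compose(self, new_agents):
            """AGI x.x * narrow-AI(0) * ... * narrow-AI(N) -> next generation."""
            major, minor, patch = map(int, self.version.split("."))
            return Generation(f"{major}.{minor}.{patch + 1}",
                              self.agents + new_agents)

    current = Generation("0.0.1", ["chess", "triage"])
    current = current.compose(["tax-law"])            # lather
    current = current.compose(["protein-folding"])    # rinse, repeat...
    print(current.version, current.agents)            # "pull out" the current version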

One interesting problem in doing AGI this way is how to distill and resolve the knowledge in each domain (i.e., across every instance of narrow-AI in the same domain). Another interesting problem is how to integrate all of the expert human knowledge these systems will amass *across domains* when it comes time to apply that knowledge to a problem in a humanly-beneficial manner. And this must be done in a non-destructive way, so that successive generations of AGI can make use of improved distillation and integration processes. It becomes an "embarrassment of riches" situation in no time at all.
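
In toy form, the bookkeeping shape of those two problems might look like this (the majority-vote resolution rule is a placeholder, emphatically not our actual distillation plan):

    from collections import Counter

    def distill(instances):
        """instances: one dict of question -> answer per trained expert agent."""
        merged = {}
        for question in {q for inst in instances for q in inst}:
            answers = [inst[question] for inst in instances if question in inst]
            merged[question] = Counter(answers).most_common(1)[0][0]
        return merged

    archive = []   # non-destructive: every generation keeps its raw inputs

    def distill_generation(instances, distiller=distill):
        """Later, better distillers can rerun over the archived raw inputs."""
        result = distiller(instances)
        archive.append({"raw": instances, "distilled": result})
        return result

    experts = [{"dose?": "10mg"}, {"dose?": "10mg"}, {"dose?": "20mg"}]
    print(distill_generation(experts))   # {'dose?': '10mg'}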

We have sound, doable-now plans for attacking both of these problems but, first, we have to get the individual agents up and "learning" from human experts. Indeed, I don't believe it is speculating too much to say that this approach could finally lead to breaking the AI-complete NLU problem. But it would be a side effect, not a prerequisite, of building AGI 1.0 incrementally.

Needless to say, this approach requires using the Internet in a very big way. Fast, reliable inter-networking is another "intelligent behavior" at which digital computers are already very good, and it wasn't widely available in any sophisticated guise until about fifteen years ago. Again, on the "going with our strengths" philosophy of incremental AGI development, we're using Skype's peer-to-peer chat protocol for human-to-agent communication (via a minimal natural-language interface -- staying within what is possible today while remaining able to expand into what will be available tomorrow) and Skype's application-to-application back-channel protocol for (human-free) inter-agent communication.
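
Schematically, the two-channel layout looks like this, sketched over an abstract transport so as not to misquote Skype's actual APIs; every name here is a stand-in:

    class Channel:
        """Stand-in for a P2P transport; not a real Skype API."""
        def send(self, peer, payload):
            print(f"-> {peer}: {payload}")

    class Agent:
        def __init__(self, name, chat, backchannel):
            self.name, self.chat, self.backchannel = name, chat, backchannel

        def tell_human(self, user, sentence):
            self.chat.send(user, sentence)            # restricted NL only

        def tell_agent(self, peer, formula):
            self.backchannel.send(peer, formula)      # FOL+ only, human-free

    a = Agent("texai-1", Channel(), Channel())
    a.tell_human("brad", "Tweety can fly.")
    a.tell_agent("texai-2", "can_fly(Tweety)")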

I'm currently working with Stephen Reed's open-source Texai AGI project. I chose to work with Steve because (a) he's a good guy with an even better mind, (b) he's actually building something, and (c) it uses the distributed "expert agent" approach I just mentioned. I don't agree with all aspects of Steve's approach; but, no surprise there: we're not clones.

> In a certain sense, I agree with you. The goal of human-level AGI is
> too ambitious. It seems to me like the wish to go to Mars without ever
> having built airplanes. There is a lot of room between chess and
> human-level AI, and it is really a big question whether we can ignore
> this room and take the big step to AGI in only one conception stage.

That's interesting. One analogy I was thinking about employing in my last e-mail on this thread was that AGI should be approached much as we approached the goal of putting a human on the moon and returning same safely to Earth. Great minds think alike? ;-)

Brad


