Dave,
Well, I thought I'd described "how" pretty well. Even why. See my recent
conversation with Dr. Heger on this list. I'll be happy to answer specific
questions based on those explanations but I'm not going to repeat them
here. Simply haven't got the time.
Although I have not been asked to do so, I do feel I need to provide an ex
post facto disclaimer. Here goes:
I am aware of the approach being taken by Stephen Reed in the Texai
project. I am currently associated with that project as a volunteer. What
I have said previously in this regard is, however, my own interpretation
and opinion insofar as what I have said concerned tactics or strategies
that may be similar to those being implemented in the Texai project. I'm
pretty sure my interpretations and opinions are highly compatible with
Steve's views even though they may not agree in every detail. My comments
should NOT, however, be taken as an "official" representation of the Texai
project's tactics, strategies or goals. End disclaimer.
I was asked by Dr. Heger to go into some of the specifics of the strategy I
had in mind. I honored his request and wrote quite extensively (for a list
posting -- sorry 'bout that) about that strategy. I have not argued, nor
do I intend to argue, that I have an approach to AGI that is better, faster
or more economical than "approach X." Instead, I have simply pointed out
that NLU and embodiment problems have proven themselves to be extremely
difficult (indeed, intractable to date). I, therefore, on those grounds
alone, believe (and it's just an OPINION, although I believe a
well-reasoned one) that we will get to a human-beneficial AGI sooner (and,
I guess, probably, therefore, cheaper) if we side-step those two proven
productivity sinks. For now. End of argument.
I'm not trying to "sell" my AGI strategy or agenda to you or anyone else.
Like many people on this list who have an opinion on these matters, I have
a background as a practitioner in AI that goes back over twenty years.
I've designed and written narrow-AI production ("expert") system engines
and been involved in knowledge engineering using those engines. The
results of my efforts have saved large corporations millions of dollars (if
not billions, over time). I can assure you that most of the humans who saw
these systems come to life and out-perform their own human experts were
pretty sure I'd succeeded in getting a human into the box. To them, it was
already AGI. I'd gotten a computer to do something only a human being
(their employee) had theretofore been able to do. And I got the computer
to do it BETTER and FASTER. Of course, these were mostly non-technical
people who didn't understand the technology (in many cases had never even
heard of it) and so, to them, there was a bit of "magic" involved. We,
here, of course know that was not the case. While the stuff I built back
in the 1980s and 1990s may not have been snazzy, whiz-bang AI with
conversational robots and the whole Sci-Fi thing, it was still damn
impressive and extremely human-beneficial. No NLU. No embodiment.
I don't claim to have a better way to get to AGI, just a less risky way,
based on past experience in the field. I have never intended to
criticize any particular AGI approach. I have not tried to show that my
approach is conceptually superior to any other approach on any specific
design point. Indeed, I firmly believe that a multitude of vastly
different approaches to this problem is a "good thing." At least initially.
As far as OCP's approach to embodiment is concerned, again it's neither the
specifics nor the novelty of any particular approach that concerns me. The
efficacy of any approach to the embodiment problem can only be determined
once it has been tried. I'm only pointing out something everybody here
knows full well: embodiment in various forms has, so far, failed to provide
any real help in cracking the NLU problem. Might it in the future? Sure.
But the key word there is "might." When you go to the track to bet on a
horse, do you look for the nag that's come in last or nearly last in every
previous race that season and say to yourself, "Hey, I have a novel betting
strategy and, regardless of what history shows (and the odds-makers say), I
think I can make a killing here by betting the farm on that consistent
loser!" Probably not. Why? Because past performance, while not a
guarantee of future performance, is really the only thing you have to go
on, isn't it?
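The long-shot logic above can be made concrete with a toy expected-value calculation. This is purely an illustrative sketch; the function name and all the probabilities below are my own assumptions, not figures from the thread:

```python
def expected_value(stake, decimal_odds, win_probability):
    """Expected profit of a fixed-odds bet.

    stake: amount wagered
    decimal_odds: total returned per unit staked on a win (stake included)
    win_probability: the bettor's estimate that the horse wins
    """
    win_profit = stake * (decimal_odds - 1)
    lose_loss = -stake
    return win_probability * win_profit + (1 - win_probability) * lose_loss

# A 20-to-1 long shot returns 21x the stake on a win (decimal odds 21.0).
# If the posted odds are roughly fair, the implied win probability is ~1/21.
stake = 10.0
ev_fair = expected_value(stake, 21.0, 1 / 21)
# If past form suggests the horse wins far less often than the odds imply
# (say 1% of the time -- an assumed number), the bet bleeds money:
ev_consistent_loser = expected_value(stake, 21.0, 0.01)

print(round(ev_fair, 2))              # ~0.0: fair odds, break-even
print(round(ev_consistent_loser, 2))  # -7.9: betting against the form book
```

The point the arithmetic makes is the one in the text: unless you have private information, past performance is what sets your win-probability estimate, and betting against it has negative expectation.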
Cheers,
Brad
P.S. Back in the early 1970s I once paid for a weekend of debauchery in
Chicago from the proceeds of my $10 bet on a 20-to-1 horse at Arlington
Park race track because I liked the name, "She's a Dazzler." So it can
happen. The only question is: How much do you want to bet? ;-)
David Hart wrote:
Brad,
Your post describes your position *very* well, thanks.
But, it does not describe *how* or *why* your AI system might achieve
domain expertise any faster/better/cheaper than other narrow-AI systems
(NLU capable, embodied, or otherwise) on its way to achieving
networked-AGI. The list would certainly benefit from any such exposition!
On a smaller point of clarification, the OCP 'embodied' design will not
attempt to "simulate deep human behavior", but rather kluge "good
enough" humanesque and non-humanesque embodiment to provide *grounding*
for "good enough" solutions in a wide variety of situations (sub-adult
performance in some situations and better-than-genius performance in
others) including NLU, types of science that require massive information
synthesis and creative leaps in thinking (including in non-everyday-human
contexts such as nanoscopic quantum scales or macroscopic relativistic
scales), plus other interesting areas such as industry, economics,
public policy, arts, etc.
-dave
------------------------------------------------------------------------
*agi* | Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com