Hi Brad,

An interesting point of conceptual agreement between OCP and Texai designs
is that very specifically engineered bootstrapping processes are necessary
to push into AGI territory. Attempting to summarize using my limited
knowledge, Texai hopes to achieve that bootstrapping via reasoning over
commonsense knowledge which has been acquired via a combination of
expert-system data entry and unsupervised learning. OCP hopes to achieve
that bootstrapping via a combination of embodied interactive learning and
reasoning supplemented with narrow-AI NL components (wordnet, RelEx semantic
comprehension, RelEx NLgen, etc.). Of course, each project has its own
reasons for believing that its approach is the most tractable and the
least likely to become stuck in the AI rabbit holes of the past.

I believe that surface comparisons of most modern AGI-oriented designs
cannot be used to make 'likely to proceed faster than others' predictions
with sufficient confidence to weave convincing arguments in an email
medium. So, making assertions about a design being 'better, faster,
cheaper, less risky, etc.' is okay, provided those assertions are clearly
opinions (backing them up in writing is good, but that generally requires
paper- or book-length treatment) and agreements to disagree are reached
readily (without resorting to digressions about straw men to undermine
others' positions). The goal of this structure for this aspect of list
discussion is to create an atmosphere where everyone can learn as much as
possible about competing AGI designs. I think we're all saying effectively
the same thing here, so we should be able to agree to agree on this point.

IMO, it's more productive to highlight the reasons why your [insert AGI
design here] system might work, rather than obsessing over the flaws of
other designs. E.g., it's really not useful to repeatedly press the fact
that past [grossly insufficient] attempts at NLU and embodiment have been
abject failures, since *ALL* past attempts at AGI have fallen short of the
mark, including knowledge-based expert systems with reasoning bolted on.
Furthermore, if all of science and engineering used the conservative logic
that "past performance [...] is really the only thing you have to go on",
then we'd still be stuck with Victorian-level science and technology, since
all of the great leaps where past performance WASN'T the best indicator
would have been missed.

On to a positive argument for the OCP design: the simple explanation for
why "embodiment in various forms has, so far, failed to provide any real
help in cracking the NLU problem" is that all past attempts at embodiment have
been incredibly crude and grossly insufficient. The technologies that might
allow for fine realtime motor control and perception (including
proprioception, or even hacks like good inverse kinematics, and other
subtleties) in real or virtual settings have simply not yet been
sufficiently developed. Any roboticist or virtual world programmer can
confirm this assertion. One aspect of OCP development focuses on this issue
and is working with the realXtend developers to enhance OpenSim to provide
sufficient functionality to enable ever more sophisticated
perception-action-reasoning loops (we'd also like to work with robot
simulation and control software at some later stage); this work will likely
be written up in a paper sometime next year.
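
Tangentially, to make "hacks like good inverse kinematics" concrete: for a
planar two-link arm there is a textbook closed-form IK solution via the law
of cosines. The sketch below is purely illustrative (plain Python, my own
function names), not OCP or realXtend code.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link arm.

    Returns (theta1, theta2) joint angles in radians that place the end
    effector at (x, y), or None if the target is out of reach.
    """
    # Law of cosines gives the cosine of the elbow angle.
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target lies outside the reachable annulus
    theta2 = math.acos(c2)  # "elbow-down" solution
    # Shoulder angle: direction to target minus the offset introduced
    # by the bent elbow.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics: end-effector position for given joint angles."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

The "elbow-up" mirror solution is obtained by negating theta2 (and
recomputing theta1); real controllers pick between the two based on joint
limits and continuity with the previous pose.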

-dave

On Sat, Oct 11, 2008 at 9:52 PM, Brad Paulsen <[EMAIL PROTECTED]> wrote:

> Dave,
>
> Well, I thought I'd described "how" pretty well.  Even why.  See my recent
> conversation with Dr. Heger on this list.  I'll be happy to answer specific
> questions based on those explanations but I'm not going to repeat them here.
>  Simply haven't got the time.
>
> Although I have not been asked to do so, I do feel I need to provide an ex
> post facto disclaimer.  Here goes:
>
> I am aware of the approach being taken by Stephen Reed in the Texai
> project.  I am currently associated with that project as a volunteer.  What
> I have said previously in this regard is, however, my own interpretation
> and opinion insofar as what I have said concerned tactics or strategies that
> may be similar to those being implemented in the Texai project.  I'm pretty
> sure my interpretations and opinions are highly compatible with Steve's
> views even though they may not agree in every detail.  My comments should
> NOT, however, be taken as an "official" representation of the Texai
> project's tactics, strategies or goals.  End disclaimer.
>
> I was asked by Dr. Heger to go into some of the specifics of the strategy I
> had in mind.  I honored his request and wrote quite extensively (for a list
> posting -- sorry 'bout that) about that strategy.  I have not argued, nor do
> I intend to argue, that I have an approach to AGI that is better, faster or
> more economical than "approach X."  Instead, I have simply pointed out that
> NLU and embodiment problems have proven themselves to be extremely difficult
> (indeed, intractable to date).  I, therefore, on those grounds alone,
> believe (and it's just an OPINION, although I believe a well-reasoned one)
> that we will get to a human-beneficial AGI sooner (and, I guess, probably,
> therefore, cheaper) if we side-step those two proven productivity sinks.
>  For now.  End of argument.
>
> I'm not trying to "sell" my AGI strategy or agenda to you or anyone else.
> Like many people on this list who have an opinion on these matters, I have a
> background as a practitioner in AI that goes back over twenty years. I've
> designed and written narrow-AI production ("expert") system engines and been
> involved in knowledge engineering using those engines.  The results of my
> efforts have saved large corporations millions of dollars (if not billions,
> over time).  I can assure you that most of the humans who saw these systems
> come to life and out-perform their own human experts, were pretty sure I'd
> succeeded in getting a human into the box.  To them, it was already AGI.
>  I'd gotten a computer to do something only a human being (their employee)
> had theretofore been able to do.  And I got the computer to do it BETTER and
> FASTER.  Of course, these were mostly non-technical people who didn't
> understand the technology (in many cases had never even heard of it) and so,
> to them, there was a bit of "magic" involved.  We, here, of course know that
> was not the case.  While the stuff I built back in the 1980's and 1990's may
> not have been snazzy, wiz-bang AI with conversational robots and the whole
> Sci-Fi thing, it was still damn impressive and extremely human-beneficial.
>  No NLU.  No embodiment.
>
> I don't claim to have a better way to get to AGI, just a less risky way.
> Based on past experience (in the field).  I have never intended to criticize
> any particular AGI approach.  I have not tried to show that my approach is
> conceptually superior to any other approach on any specific design point.
>  Indeed, I firmly believe that a multitude of vastly different approaches to
> this problem is a "good thing."  At least initially.
>
> As far as OCP's approach to embodiment is concerned, again it's neither the
> specifics nor the novelty of any particular approach that concerns me.  The
> efficacy of any approach to the embodiment problem can only be determined
> once it has been tried.  I'm only pointing out something everybody here
> knows full well: embodiment in various forms has, so far, failed to provide
> any real help in cracking the NLU problem.  Might it in the future?  Sure.
>  But the key word there is "might."  When you go to the track to bet on a
> horse, do you look for the nag that's come in last or nearly last in every
> previous race that season and say to yourself, "Hey, I have a novel betting
> strategy and, regardless what history shows (and the odds-makers say), I
> think I can make a killing here by betting the farm on that consistent
> loser!"  Probably not.  Why?  Because past performance, while not a
> guarantee of future performance, is really the only thing you have to go on,
> isn't it?
>
> Cheers,
> Brad
>
> P.S.  Back in the early 1970's I once paid for a weekend of debauchery in
> Chicago from the proceeds of my $10 bet on a 20-to-1 horse at Arlington Park
> race track because I liked the name, "She's a Dazzler."  So it can happen.
>  The only question is: How much do you want to bet? ;-)
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>


