And, just to clarify: the fact that I set up this list, pay $12/month for
its hosting, and deal with the occasional list-moderation issues that
arise is not supposed to give my **AI opinions** primacy over anybody
else's in discussions on the list...  I only intervene as moderator when
discussions go off-topic, not to push my perspective on people ...
and on the rare occasions when I am speaking as list owner/moderator rather
than as "just another AI guy with his own opinions", I try to be very clear
that that is the role I'm adopting.

ben g

On Sat, Oct 11, 2008 at 11:37 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>
> Brad,
>
> Sorry if my response was somehow harsh or inappropriate, it really wasn't
> intended as such.  Your contributions to the list are valued.  These last
> few weeks have been rather tough for me in my entrepreneurial role (it's not
> the best time to be operating a small business, which is what Novamente LLC
> is) so I may be in a crankier mood than usual for that reason.
>
> I've been considering taking a break from this email list myself for a few
> weeks or months, not because I don't enjoy the discussions, but because
> they're taking so much of my time lately!
>
> I guess the essence of my response to you was
>
> ***
> What I don't see in your counterproposal is any kind of grounding of your
> ideas in a theory of mind.  That is: why should I believe that loosely
> coupling a bunch of clever narrow-AI widgets, as you suggest, is going to
> lead to an AGI capable of adapting to fundamentally new situations not
> envisioned by any of its programmers?   I'm not completely ruling out the
> possibility that this kind of strategy could work, but where's the beef?  I'm
> not asking for a proof, I'm asking for a coherent, detailed argument as to
> why this kind of approach could lead to a generally-intelligent mind.
> ***
>
> and I don't really see what is offensive about that, but maybe my judgment
> is "off" this week...
>
>
> -- Ben G
>
>
>
> On Sat, Oct 11, 2008 at 11:32 AM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
>
>> Ben,
>>
>> Well, I guess you told me!  I'll just be taking my loosely-coupled
>> "...bunch of clever narrow-AI widgets..." right on out of here.  No need to
>> worry about me venturing an opinion here ever again.  I have neither the
>> energy nor, apparently, the intellectual ability to respond to a broadside
>> like that from the "top dog."
>>
>> It's too bad.  I was just starting to feel "at home" here.  Sigh.
>>
>> Cheers (and goodbye),
>> Brad
>>
>> Ben Goertzel wrote:
>>
>>>
>>> A few points...
>>>
>>> 1) Closely associating embodiment with GOFAI is just flat-out
>>> historically wrong.  GOFAI refers to a specific class of approaches to AI
>>> that were pursued a few decades ago, which were not centered on embodiment as
>>> a key concept or aspect.
>>> 2)
>>> Embodiment-based approaches to AGI certainly have not been extensively
>>> tried and failed in any serious way, simply because of the primitive nature
>>> of real and virtual robotic technology.  Even right now, the real and
>>> virtual robotics tech are not *quite* there to enable us to pursue
>>> embodiment-based AGI in a really tractable way.  For instance, humanoid
>>> robots like the Nao cost $20K and have all sorts of serious actuator
>>> problems ... and virtual world tech is not built to allow fine-grained AI
>>> control of agent skeletons ... etc.   It would be more accurate to say that
>>> we're 5-15 years away from a condition where embodiment-based AGI can be
>>> tried out without immense time-wastage on making not-quite-ready supporting
>>> technologies work....
>>>
>>> 3)
>>> I do not think that either humanlike NL understanding or humanlike embodiment
>>> is in any way necessary for AGI.   I just think that they seem to represent
>>> the shortest path to getting there, because they represent a path that **we
>>> understand reasonably well** ... and because AGIs following this path will
>>> be able to **learn from us** reasonably easily, as opposed to AGIs built on
>>> fundamentally nonhuman principles.
>>>
>>> To put it simply, once an AGI can understand human language we can teach
>>> it stuff.  This will be very helpful to it.  We have a lot of experience in
>>> teaching agents with humanlike bodies, communicating using human language.
>>>  Then it can teach us stuff too.   And human language is just riddled
>>> through and through with metaphors rooted in embodiment, suggesting that solving
>>> the disambiguation problems in linguistics will be much easier for a system
>>> with vaguely humanlike embodied experience.
>>>
>>> 4)
>>> I have articulated a detailed proposal for how to make an AGI using the
>>> OCP design together with linguistic communication and virtual embodiment.
>>>  Rather than being just a promising-looking assemblage of in-development
>>> technologies, the proposal is grounded in a coherent holistic theory of how
>>> minds work.
>>>
>>> What I don't see in your counterproposal is any kind of grounding of your
>>> ideas in a theory of mind.  That is: why should I believe that loosely
>>> coupling a bunch of clever narrow-AI widgets, as you suggest, is going to
>>> lead to an AGI capable of adapting to fundamentally new situations not
>>> envisioned by any of its programmers?   I'm not completely ruling out the
>>> possibility that this kind of strategy could work, but where's the beef?  I'm
>>> not asking for a proof, I'm asking for a coherent, detailed argument as to
>>> why this kind of approach could lead to a generally-intelligent mind.
>>>
>>> 5)
>>> It sometimes feels to me like the reason so little progress is made
>>> toward AGI is that the 2000 people on the planet who are passionate about
>>> it are moving in 4000 different directions ;-) ...
>>>
>>> OpenCog is an attempt to get a substantial number of AGI enthusiasts all
>>> moving in the same direction, without claiming this is the **only** possible
>>> workable direction.
>>> Eventually, supporting technologies will advance enough that some smart
>>> guy can build an AGI on his own in a year of hacking.  I don't think we're
>>> at that stage yet -- but I think we're at the stage where a team of a couple
>>> dozen could do it in 5-10 years.  However, if that level of effort can't be
>>> systematically summoned (thru gov't grants, industry funding, open-source
>>> volunteerism or wherever) then maybe AGI won't come about till the
>>> supporting technologies develop further.  My hope is that we can overcome
>>> the existing collective-psychology and practical-economic obstacles that
>>> hold us back from creating AGI together, and build a beneficial AGI ASAP ...
>>>
>>> -- Ben G
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Oct 6, 2008 at 2:34 AM, David Hart <[EMAIL PROTECTED]> wrote:
>>>
>>>    On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
>>>
>>>        So, it has, in fact, been tried before.  It has, in fact, always
>>>        failed. Your comments about the quality of Ben's approach are
>>>        noted.  Maybe you're right.  But, it's not germane to my
>>>        argument, which is that those parts of Ben G.'s approach that
>>>        call for human-level NLU, and that propose embodiment (or
>>>        virtual embodiment) as a way to achieve human-level NLU, have
>>>        been tried before, many times, and have always failed.  If Ben
>>>        G. knows something he's not telling us, then, when he does tell us, I'll
>>>        consider modifying my views.  But, remember, my comments were
>>>        never directed at the OpenCog project or Ben G. personally.
>>>         They were directed at an AGI *strategy* not invented by Ben G.
>>>        or OpenCog.
>>>
>>>
>>>    The OCP approach/strategy, both in crucial specifics of its parts
>>>    and particularly in its total synthesis, *IS* novel; I recommend a
>>>    closer re-examination!
>>>
>>>    The mere resemblance of some of its parts to past [failed] AI
>>>    undertakings is not enough reason to dismiss those parts, IMHO,
>>>    dislike of embodiment or NLU or any other aspect that has a GOFAI
>>>    past lurking in the wings notwithstanding.
>>>
>>>    OTOH, I will happily agree to disagree on these points to save the
>>>    AGI list from going down in flames! ;-)
>>>
>>>    -dave
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Ben Goertzel, PhD
>>> CEO, Novamente LLC and Biomind LLC
>>> Director of Research, SIAI
>>> [EMAIL PROTECTED]
>>>
>>> "Nothing will ever be attempted if all possible objections must be first
>>> overcome "  - Dr Samuel Johnson
>>>
>>>
>>>
>>>
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "Nothing will ever be attempted if all possible objections must be first
> overcome "  - Dr Samuel Johnson
>
>
>


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson



