Jim,

I have now read your summary of your AGI thinking.

My reaction is that these are fairly commonsensical cognitive-science
notions. There's not much to argue with here, nor do you give much meat
regarding how any of the functions or structures you describe might
actually be achieved in a computational (or other) system... You also
seem to leave a lot out, focusing mainly on the
declarative/conceptual level rather than other aspects of intelligence...

Basically -- and being quite frank -- I don't see any AGI design in that
summary, just some relatively commonplace (though mostly sensible)
cog-sci notions...

-- Ben G

On Sat, Apr 20, 2013 at 12:16 PM, Jim Bromer <[email protected]> wrote:

> I am not sure what Stan meant by a lack of depth, but I assume that he was
> talking about definitions of the terms that he linked on.  So, for
> example, I should have included a deeper definition of the term
> "understanding"?  No.  I was saying that to understand one idea you need to
> understand many related ideas, and that recognition requires some kind of
> imaginative projection of previously acquired insight.  If you get it, then
> enough said.  You already have the many related ideas that you need to
> understand the concept.
>
> I'm sorry, I just do not see the foundations of the other
> criticisms.  There is no question that the explanation of an actual
> experiment and the honest reporting of the results would be more inspiring
> if something promising were achieved, but the usual academic-style paper
> does not meet that standard of achievement.  Congratulating yourself
> for belonging to a group who have mastered the style of the academic paper
> but who have not actually contributed anything substantial through their
> papers is nothing to be proud of.  And that is why most of the criticisms
> I received were criticisms of style, or empty blanket dismissals
> that found nothing in my paper actually worth criticizing.  If you had made
> a little effort you might have actually contributed something.  Stan at
> least created a curiosity of deconstruction.
>
> I thought I got an interesting challenge about the limitations of
> text-based AGI, but it turned out to be part of an argument that computer
> programs could not make inferences!
>
> And the criticism that my program would not be fast enough may be correct,
> but it was the first thing I said in my summary.  That was what I was
> alluding to when I pointed out that complexity is a major problem.
>
> There was not one good criticism of my summary.  None of you actually
> seemed to understand what I was saying.  I find that hard to believe, but
> the empty criticisms leave me with that conclusion.  So even though I was
> perturbed by the insipid pettiness of some of the criticisms, there is no
> question in my mind that they represent rejection by an audience
> that was truly unable to understand what I was talking about.
>
> The only question is whether I can turn these ideas into some kind of
> working model.
> Jim Bromer
>
>
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
