On Tue, Sep 30, 2008 at 2:08 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:

> From: "Ben Goertzel" <[EMAIL PROTECTED]>
> To give a brief answer to one of your questions: analogy is
> mathematically a matter of finding mappings that match certain
> constraints.   The traditional AI approach to this would be to search
> the constrained space of mappings using some search heuristic.  A
> complex systems approach is to embed the constraints into a dynamical
> system and let the dynamical system evolve into a configuration that
> embodies a mapping matching the constraints.  Based on this, it is
> provable that complex systems methods can solve **any** analogy
> problem, given appropriate data, using, for example, asymmetric
> Hopfield nets (as described in Amit's book on Attractor Neural
> Networks back in the 80's).  Whether they are the most
> resource-efficient way to solve such problems is another issue.
> OpenCog and the NCE seek to hybridize complex-systems methods with
> probabilistic-logic methods, thus alienating almost everybody ;=>
> -- Ben G
> --------------------------
>
> The problem is that you are still missing what should be the main
> focus of your efforts.  It's not whether or not your program does good
> statistical models, or uses probability nets, or hybrid technology of
> some sort, or that you have solved some mystery to analogy that was
> not yet understood.



I am getting really, really tired of a certain conversational pattern that
often occurs on this list!!

It goes like this...

Person A asks some question about topic T, which is a small
part of the overall AGI problem

Then, I respond to them about topic T

Then, Person B says "You are focusing on the wrong thing,
which shows you don't understand the AGI problem."

But of course, all I did to bring on that criticism was answer
someone's question about a specific topic, T ...

Urrggghh...

My response to Tintner's question had nothing to do with the main
focus of my efforts.  It was an attempt to compactly answer
his question ... it may have failed, but that's what it was...
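
For anyone who wants that compact answer unpacked a little, here is a
toy sketch of analogy-as-constraint-satisfaction settling in a
Hopfield-style net.  It is purely illustrative, my own throwaway
construction: a plain symmetric net rather than the asymmetric variant
I mentioned, with a hand-built weight matrix rather than learned
weights, on the stock solar-system/atom analogy.

import itertools
import random

source = ["sun", "planet"]        # toy source domain
target = ["nucleus", "electron"]  # toy target domain

# One unit per candidate correspondence (source element -> target element).
units = list(itertools.product(source, target))
n = len(units)

# Symmetric weights: punish pairs of units that map one element two ways
# (violating one-to-one mapping), mildly reward non-conflicting pairs.
W = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        (s1, t1), (s2, t2) = units[i], units[j]
        W[i][j] = W[j][i] = -2.0 if (s1 == s2 or t1 == t2) else 1.0

# Structural constraint: attracts(sun, planet) should map onto
# attracts(nucleus, electron), so reward that pair of correspondences.
a = units.index(("sun", "nucleus"))
b = units.index(("planet", "electron"))
W[a][b] = W[b][a] = 3.0

def energy(state, bias=0.5):
    e = -sum(W[i][j] * state[i] * state[j]
             for i in range(n) for j in range(i + 1, n))
    return e - bias * sum(state)

def settle(bias=0.5, steps=100):
    # Asynchronous Hopfield updates; each flip never raises the energy.
    state = [random.choice([0, 1]) for _ in range(n)]
    for _ in range(steps):
        i = random.randrange(n)
        net = sum(W[i][j] * state[j] for j in range(n)) + bias
        state[i] = 1 if net > 0 else 0
    return state

# A few random restarts dodge shallow local minima in this tiny landscape.
best = min((settle() for _ in range(10)), key=energy)
print([units[i] for i in range(n) if best[i]])
# typically: [('sun', 'nucleus'), ('planet', 'electron')]

The point is only that the mapping emerges as a low-energy attractor of
the dynamics, rather than from an explicit heuristic search over the
space of mappings.
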



>
>
> An effective program has to be able to learn how to structure its
> interrelated and interactive knowledge effectively, according both to
> the meaning of relatively sophisticated linguistic (or linguistic-like)
> communication and to its own experience with less sophisticated data
> (like sensory input of various kinds).
>

Yes.  Almost everyone working in the field agrees with this.


>
> The most important thing that is missing is the answer to the
> question: how does the program learn about ideological structure?  If
> it weren't for ambiguity (in all of its various forms), then this
> knowledge would be easy for a programmer to acquire through gradual
> experience.  But sophisticated input like language, and the task of
> making sense of less sophisticated input like simple sensory data, are
> highly ambiguous and confusing to the AI programmer.
>
> It is as if you are revving up the engine and trying to show off by
> the roar of your engine, the flames and smoke shooting out the
> exhaust, and the squeals and smoke of your tires burning, but then
> that is all there is to it.  You will just be spinning your wheels
> until you deal with the problem of ideological structure in the
> complexity of highly ambiguous content.
>
> So far, it seems like very few people have any idea what I am talking
> about, because they almost never mention the problem as I see it.
> Very few people have actually responded intelligibly to this kind of
> criticism, and for those who do, their answer is usually limited to
> explaining that this is what we are all trying to get at, or that this
> was done in the old days, and then dropping it.  So I will understand
> if you don't reply to this.



On the contrary, I strongly suspect
nearly everyone working in the AGI field thoroughly
understands the problem you are talking about, although they may
not use your chosen terminology ("ideological structure" is a weird
phrase in this context).

But I don't quite understand your use of verbiage in the phrase
"ideological structure in the complexity of highly ambiguous content."

What is it that you really mean here?  Just that an AGI has to
pragmatically understand
the relationships between concepts, as implied by ambiguous, complex
uses of language and as related to the relevance of concepts to the
nonlinguistic world?
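
If that is roughly what you mean, here is a deliberately tiny sketch of
the flavor of thing involved.  The concept graph and weights below are
hand-invented for illustration; in a real system they would have to be
learned from linguistic and sensory experience, which is exactly the
hard part:

# Toy disambiguation: pick the sense of an ambiguous word whose
# conceptual neighbors best overlap the surrounding context.

# Tiny hand-built concept graph: sense -> weighted related concepts.
senses = {
    "bank/finance": {"money": 0.9, "loan": 0.8, "account": 0.7},
    "bank/river":   {"water": 0.9, "shore": 0.8, "fishing": 0.6},
}

def disambiguate(context_words):
    # Score each sense by the total weight of related concepts
    # that actually appear in the context.
    def score(sense):
        related = senses[sense]
        return sum(related.get(w, 0.0) for w in context_words)
    return max(senses, key=score)

print(disambiguate(["deposit", "money", "account"]))  # -> bank/finance
print(disambiguate(["water", "fishing", "boat"]))     # -> bank/river

Scaling that idea from a ten-line toy with fixed weights to open-ended,
experientially learned concept networks is where all the actual
difficulty lives.
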

I believe that OpenCogPrime will be able to do this, but I don't have
a one-paragraph explanation of how.  A complex task requires a complex
solution.  My proposed solution is documented online.

-- Ben G


