Mike Tintner wrote in the message archived at 
http://www.mail-archive.com/[email protected]/msg09744.html 

> [...]
> The first thing is that you need a definition 
> of the problem, and therefore a test of AGI. 
> And there is nothing even agreed about that - 
> although I think most people know what is required. 
> This was evident in Richard's recent response to 
> A.T. Murray's recent declaration of his "AGI" system. 
> Richard clearly knew pretty well why that system 
> failed the AGI "test" but he didn't have an explicit 
> definition of the test at his fingertips.

Richard Loosemore "clearly knew pretty well" nothing
of the sort. His was a lazy man's response. He did not 
download and experiment with the MindForth program at
http://mentifex.virtualentity.com/mind4th.html and
http://mind.sourceforge.net/mind4th.html -- he only
made a few generalizations about what he lazily
_thought_ MindForth might be doing. In the archive
http://www.mail-archive.com/[email protected]/msg09674.html
Richard Loosemore vaguely compares sophisticated
MindForth with the canned-response "Eliza" program --
which nobody ever claimed was an artificial intelligence.

Richard Loosemore furthermore suggested that all of 
the cognitive processes in the Eysenck & Keane textbook
of Cognitive Psychology would have to be implemented
in MindForth before it could be said to have achieved
True AI functionality. That demand is like telling
Wilbur and Orville Wright that they have to demo
a transatlantic Anglo-French Concorde jet before they may 
claim to have achieved "true airplane functionality."

Sorry, Richard, but the AI breakthrough functionality
is, plain and simple, the ability to think -- to activate
an associative string of concepts and to express the 
thinking in the generative grammar of Chomsky.

There is no requirement that people be other than
lazy, smug and self-satisfied on this AGI list.
I felt that I should announce the end of the
decade-long process of debugging MindForth AI.

Now the controversy has spilled over to 
http://onsingularity.com/item/3175 
and the dust has not yet settled.

Richard is beginning to act like ESY!
>
> The test, I suggest, is essentially not the Turing 
> Test or anything like that, but "The General Test." 
> If your system is an AGI, or has AGI potential, 
> then it must first of all have a skill and be 
> able to solve problems in a given domain. [...]

The skill of MindForth is spreading activation -- 
from concept to concept -- under the direction of 
a Chomskyan linguistic superstructure.
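
For the curious, here is a minimal Python sketch of what I mean
by spreading activation. It is illustrative only -- MindForth
itself is coded in Forth, and none of these names appear in it:

    # Minimal sketch of spreading activation; illustrative names only,
    # not the actual MindForth (Forth) implementation.
    concepts = {
        "fish": {"activation": 0.0, "links": {"have": 0.5, "tail": 0.5}},
        "have": {"activation": 0.0, "links": {}},
        "tail": {"activation": 0.0, "links": {}},
    }

    def spread(source, energy, decay=0.5):
        """Inject activation at one concept and propagate it along
        associative links, attenuating at each hop."""
        concepts[source]["activation"] += energy
        for target, weight in concepts[source]["links"].items():
            passed = energy * weight * decay
            if passed > 0.01:          # stop once activation fades out
                spread(target, passed, decay)

    spread("fish", 1.0)
    for name, node in concepts.items():
        print(name, round(node["activation"], 3))
    # fish 1.0 / have 0.25 / tail 0.25

The linguistic superstructure then reads the most active chain
of concepts off into a sentence.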

Now I would like to digress and draw Ben Goertzel's
math-minded attention to my latest "creative idea" at
http://mind.sourceforge.net/computationalization.html#syllogism 
where on 30 January 2008 I thought up and uploaded the following:

It may be possible to endow an AI mind with the ability 
to think in syllogisms by creating super-concepts or 
set-concepts above and beyond, and yet in parallel with, 
the ordinary concepts. Certain words like "all" or "never" 
may be coded to duplicate a governed concept and to endow 
the duplicate with only one factual or asserted attribute, 
namely the special relationship modified by the "all" or 
"never" assertion. Take, for instance, the following. 

        All fish have tails. 
        Tuna are fish. 
        Tuna have tails. 

When the AI mind encounters an "all" proposition involving 
the verb "have" and the direct object "tails", a new, 
supervenient concept of "fish-as-set" is created to hold 
only one class of associative nodes -- the simultaneous 
association to "have" and to the "tail" concept. 
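
In rough Python (a sketch only -- the real MindForth is coded
in Forth, and these names are mine for illustration), the
creation step might go:

    # Illustrative sketch: turning "All fish have tails." into a
    # supervenient set-concept that shadows the base "fish" concept.
    set_concepts = {}    # base concept name -> list of set-concept records

    def assert_all(noun, verb, obj):
        """Create a noun-as-set concept holding exactly one asserted
        attribute: the (verb, object) pair governed by "all"."""
        record = {"name": noun + "-as-set",
                  "quantifier": "all",
                  "attribute": (verb, obj)}
        set_concepts.setdefault(noun, []).append(record)
        return record

    assert_all("fish", "have", "tail")
    print(set_concepts["fish"][0]["name"])    # fish-as-set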

Whenever the basic "fish" concept is activated, the 
fish-as-set concept is also activated, ready to "pounce," 
as it were, with the supervenient assertion that all 
fish have tails. Thenceforth, when any animal is identified 
as being a fish by some kind of "isA" tag, the "fish-as-set" 
concept is also activated and the AI mind superveniently 
knows that the animal in question has a tail. The machine 
reasoning could go somewhat like the following dialog. 

        Do tuna have tails? 
        Are tuna plants? 
        Tuna are animals. 
        What kind of animals? 
        Tuna are fish. 
        All fish have tails. 
        Tuna have tails. 
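
The dialog boils down to following "isA" tags upward until a
set-concept fires. A self-contained Python sketch of that chain
(again with illustrative names of my own, not the MindForth source):

    # Illustrative sketch: follow "isA" tags upward until a supervenient
    # set-concept settles the question "Do tuna have tails?"
    isa = {"tuna": "fish", "fish": "animal"}     # isA taxonomy
    set_concepts = {
        "fish": [{"quantifier": "all", "attribute": ("have", "tail")}],
    }

    def has_attribute(noun, verb, obj):
        """Walk the isA chain; an "all" set-concept met along the way
        that asserts (verb, obj) answers the question affirmatively."""
        while noun is not None:
            for sc in set_concepts.get(noun, []):
                if sc["quantifier"] == "all" and sc["attribute"] == (verb, obj):
                    return True
            noun = isa.get(noun)       # e.g. tuna -> fish -> animal
        return False

    print(has_attribute("tuna", "have", "tail"))   # True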

The ideas above conform with set theory and with the 
notion of neuronal prodigality -- that there need be 
no concern about wasting neuronal resources -- and with 
the idea of "inheritance" in object-oriented programming (OOP). 

Whereas normally a new fiber might be attached to the 
fiber-gang of a redundantly entertained concept, it is 
just as easy to engender a "concept-as-set" fiber in 
parallel with the original, basic concept. For some 
basic concepts, there might be multiple concept-as-set 
structures representing multiple "all" or "never" ideas 
believed to be the truth about the basic, ordinary concept. 
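
Several such parallel records for one base concept might be kept
like this (a sketch only; the "never" example about fur is my own,
added for illustration):

    # Illustrative sketch: one base concept shadowed by several
    # concept-as-set records, one per "all" or "never" belief.
    set_concepts = {
        "fish": [
            {"quantifier": "all",   "attribute": ("have", "tail")},
            {"quantifier": "never", "attribute": ("have", "fur")},
        ],
    }

    def settled(noun, verb, obj):
        """Return True/False when a set-concept settles the question,
        or None when no "all"/"never" belief applies."""
        for sc in set_concepts.get(noun, []):
            if sc["attribute"] == (verb, obj):
                return sc["quantifier"] == "all"
        return None

    print(settled("fish", "have", "tail"))   # True:  all fish have tails
    print(settled("fish", "have", "fur"))    # False: fish never have fur
    print(settled("fish", "eat", "bread"))   # None:  no belief recorded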

The AI mind, thinking about an ordinary concept in the 
course of problem-solving, does not have to formally engage 
in the obvious syllogism that can be drawn from the given 
situation, but may simply think along a pathway from "isA" 
fish to "has a tail," because the supervenient set-concept 
automatically guides the line of reasoning. 

******* END OF TRANSCRIBED TEXT *************

Ben Goertzel has from time to time discussed quantitative
reasoning on this list. I post the above text because
I am ready to announce:

"Quantitative Reasoning Has Been Solved!"

For five years I struggled with how to
implement syllogisms based on the word "all"
in MindForth. It will be a while before I
implement the "supervenient concepts," but
you have read it here on BenG's AGI list.

Bye for now,

AT Murray
