Arthur,

Can it represent negatives? Time? Textures? Relationships? Distinguish homonyms from context? Represent the concept of a homonym? Represent itself? Can it handle deixis?

More importantly, do you have any principled reason for claiming that it will soon be able to handle any of these things, other than your optimistic statement that "If robot builders were to add sensory and motor routines to Mind.Forth, the AI would flesh out its conceptual knowledge and interact with the world"?

So far, what you describe looks like something I wrote in Basic on a Sinclair Spectrum computer in 1982.

Richard Loosemore

A. T. Murray wrote:
In Vernor Vinge's classic paper on the Technological Singularity:
And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected -- perhaps even to the researchers involved. ("But all our previous models were catatonic! We were just tweaking some parameters....")

Very truly yours, Mentifex, here was "tweaking some parameters" last Wednesday when suddenly the AI Mind in Win32Forth woke up.

It was not an accident. It was a milestone after years of coding
and debugging an AI Mind based on a linguistic theory of mind.

Mind.Forth thinks on the basis of "spreading activation." When a pre-verbal concept in the AI Forthmind becomes active enough to initiate the generation of a sentence of thought in English, the various "Think" modules not only follow the chain of activation snaking across the conceptual mindgrid, but they also "midwife" or push the nascent thought as it seeks out its own as-it-were birth-canal. As an example, let's take the startling exclamation by an AI Mind of the sentence: "Ben writes books."

When the concept "Ben" comes to mind, the AI has selected that concept because it was momentarily the most active concept on the entire conscious mindgrid. (Perhaps a camera saw "Ben".) Having reactivated the concept of Ben, the linguistic AI Mind lets activation spread from instantiation nodes on the diachronic Ben-concept to various verbs that have been associated in the past with knowledge of Ben's activities. For whatever reason, the verb "write" comes to mind in association with the concept of the persona of Ben. And what does Ben write? Letters, AGI posts, poems, rants, shopping lists, extortion notes, BOOKS!
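The subject-to-verb step can be sketched in a few lines of JavaScript (the language of Mind.html). This is only a toy illustration under my own assumptions: the concept names, weights, and the `spread` and `mostActive` functions are invented for the example and are not taken from the actual Mind.Forth or Mind.html source.

```javascript
// Toy associative memory: weights are invented, not Mind.Forth data.
const associations = {
  Ben:   { write: 0.8, walk: 0.3 },   // past associations with "Ben"
  write: { book: 0.7, letter: 0.5 }   // things "write" has linked to
};

// Spread the subject concept's activation to every associated
// concept, depositing a weighted spike on each one.
function spread(subject, activation) {
  const spikes = {};
  for (const [target, weight] of Object.entries(associations[subject] || {})) {
    spikes[target] = (spikes[target] || 0) + activation * weight;
  }
  return spikes;
}

// Concept selection: the most highly activated candidate wins.
function mostActive(spikes) {
  return Object.entries(spikes).sort((a, b) => b[1] - a[1])[0][0];
}

const verbSpikes = spread('Ben', 1.0);
const verb = mostActive(verbSpikes); // "write" wins over "walk"
```

The same two calls then carry the chain onward, so `mostActive(spread('write', 1.0))` picks the object noun, mimicking the chain of activation across the mindgrid.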

You might think that it is easy to create software that associates "Ben... writes... books," but until Wed.7.JUN.2006 it was not that simple. (Now any AI Lab anywhere can do it.)

In 2002 I published a book "AI4U" (to compete with "AIMA") that contained my JavaScript Mind.html code as Appendix A. The code was buggy and created English gibberish as output. The output did not start out as gibberish, but it quickly degraded into gibberish as the AI made spurious associations. Mind.Forth -- the precursor to Mind.html -- had the same bugs, but in 2005 I inserted powerful diagnostic routines into Mind.Forth and gradually removed the very worst bugs. On 16 March 2005 I also made it possible to press Tab on the keyboard to cycle through such display modes as (currently) Normal, Transcript, Tutorial, and Diagnostic.

The Tutorial mode was scrolling voluminous messages so rapidly down the screen that the user had to kill the AI to read them. On 9 November 2005 I consolidated the Tutorial display horizontally.

The idea behind the Tutorial mode was to show the deep internal thinking process of the artificial mind. By tweaking the same code that created the output of the Mind, in Tutorial mode I was able to show the range of possible thoughts available to the AI and the forces at play in determining pathways of thought across the conceptual mindgrid. One golden goal, however, eluded me until now -- showing the slosh-over effect.

In Mentifex AI lore, the slosh-over effect is what happens when you think by activating two concepts ("Ben... writes...") and the spreading activation builds up so strongly on the second concept (the verb) that it "sloshes over" onto the third concept -- the direct object in "Ben writes books."

The tweaking that I did last Wednesday was simply to make a subject-noun concept start out with an equal activation at all its nodes and pass a spike of activation to verbs -- before the selection of the verb to go with the subject. When the verb is selected, an increment of activation is added to all its nodes, including any nodes on which a modicum of "spike" activation has already been deposited. When each node on the verb-concept fires its own spike of relative activation over to an associated object-noun, the stage is set for the AI Mind to pick a valid object -- not the spurious associations which engender AI gibberish.
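One way to picture that spike-and-increment tweak, purely as an illustrative sketch: the constants and the association weights below are my own inventions for the example, not values from the Mind.Forth code.

```javascript
// Invented constants standing in for Mind.Forth's activation tuning.
const SPIKE = 0.5;      // activation spiked from subject to each verb
const INCREMENT = 0.8;  // added to the verb's nodes once it is selected

// Past associations from the verb "write" to candidate objects;
// the higher weight marks the valid association, the lower a
// spurious one of the kind that used to engender gibberish.
const objectWeights = { book: 0.9, gibberish: 0.1 };

// 1. The subject deposits a spike on the verb before selection.
let verbActivation = SPIKE;

// 2. Selecting the verb adds an increment on top of that spike.
verbActivation += INCREMENT;

// 3. The built-up activation "sloshes over" to the object nouns.
const objectSpikes = {};
for (const [obj, weight] of Object.entries(objectWeights)) {
  objectSpikes[obj] = verbActivation * weight;
}

// The strongly associated object outscores the spurious one.
const chosenObject = Object.entries(objectSpikes)
  .sort((a, b) => b[1] - a[1])[0][0];
```

With the spike and the increment stacked on the verb, the valid object decisively outscores the spurious candidate, which is the slosh-over effect that suppresses gibberish.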

There are still some bugs to be worked out before Mind.Forth homes in unerringly on logical thoughts. But it is already possible to download Mind.Forth and press Tab into Tutorial mode, where the slosh-over is visible.

You won't be alone in your testing of the Mind.Forth AI. The recent access logs show many visits to http://mind.sourceforge.net/mind4th.html without even a referral. Some Netizens who search the Web for "artificial intelligence download" find the AI.

To pre-empt a criticism that may occur to you, I would like to state that what Mind.Forth achieves is not comparable to a database or a look-up table. The knowledge base (KB) of Mind.Forth consists of concepts which interact by spreading activation. If robot builders were to add sensory and motor routines to Mind.Forth, the AI would flesh out its conceptual knowledge and interact with the world.

Respectfully submitted,

Arthur T. Murray
--
http://www.blogcharm.com/Singularity/25603/Timetable.html
-------
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]



