Hey, Ben. Arthur here. Remember the prediction of 
the 2012 Singularity? Well, guess what happened.
I just had the following conversation with my 
http://www.scn.org/~mentifex/mindforth.txt AI:

Human: boys make cars
Robot: CARS  WHAT ARE CARS

Human: boys make guns
Robot: GUNS  WHAT ARE GUNS

Human: boys
Robot: THE BOYS MAKE THE CARS

Human: boys
Robot: THE BOYS MAKE THE GUNS

Chief AGI guru Dr. Goertzel! The above is not 
a cherry-picked test result, polished after 
much experimentation and put out for PR 
purposes. It just happened during hard-core 
AI coding.

Now, before everybody jumps in and snipes, 
"Aw, there's that crackpot Mentifex again," 
let me explain the significance of the I/O.

I have been coding MindForth AI for twelve 
years, since 1998 or maybe even earlier. 
Today, Mon.6.SEP.2010, I introduced 
quasi-neuronal inhibition into the free, 
open-source AI source code for the first 
time. Why?

A year ago, around August or September of 
2009, you and I had our set-to (rumble?) 
concerning the AGI Roadmap and my posts 
there, which were deleted ("rolled back") 
by Itamar Arel. No biggie. I did not fix 
Itamar's wagon last Halloween, so I won't 
fix it this Halloween, either. You see, I 
was maintaining my own AI Roadmap at 
http://code.google.com/p/mindforth/wiki/RoadMap
concurrently with my contributions to 
your group's Roadmap. 

The main thing is, I was entering the 
Roadmap Milestone of trying to achieve 
"self-referential thought" with my AI.
That particular achievement requires 
covering a lot of ground, not just 
"you and I" interactions between the 
human user and the artificial AI Mind.
The AI needs to acquire a general knowledge 
of the surrounding world, so that man and 
machine may discuss the AI as a participant 
in its world.

So at the end of 2009 I was coding the 
ability of the AI to respond to who-queries 
and what-queries, so that the AI can deal 
with questions like "Who are you?" and 
"What are you?"

Recently I have perceived the need for 
the AI to respond with multiple answers 
to queries on topics where the AI knows 
not just a single fact but several facts, 
such as "What do robots make?" I want 
the AI to be able to say such things as:

"Robots make cars."
"Robots make tools."
"Robots make parts."
"Robots make robots."

It dawned on me a few days ago that the 
AI software would have to suppress each 
given answer in order to move on to the 
next answer available in the knowledge 
base (KB). In other words, for the first 
time ever, I had to code _inhibition_ 
into the AI Mind. Tonight I have done 
so, and that simple conversation near the 
top of this message shows the results.
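
In rough illustrative Python (again, not 
the project's actual Forth code), the new 
mechanism amounts to this: each fact 
carries an activation level, the responder 
retrieves the most active matching fact, 
and then drives that fact's activation 
negative. The numbers below are arbitrary 
stand-ins, not MindForth's real parameters:

# Sketch of the quasi-neuronal inhibition idea (illustrative
# Python): answering a query picks the most active matching
# fact and then inhibits it, so that a repeated query
# surfaces the next fact in the knowledge base.
facts = [
    # [subject, verb, object, activation]
    ["boys", "make", "cars", 20],
    ["boys", "make", "guns", 20],
]

def respond(word):
    matches = [f for f in facts if f[0] == word]
    if not matches:
        # Unknown subject: ask about it, as in the transcript above.
        return "%s  WHAT ARE %s" % (word.upper(), word.upper())
    best = max(matches, key=lambda f: f[3])
    best[3] = -20  # inhibition: suppress the answer just given
    return "THE %s %s THE %s" % (
        best[0].upper(), best[1].upper(), best[2].upper())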

The same query, consisting of just the 
word "boys", elicits two different answers 
from the KB because each response from the 
AI goes immediately into inhibition, in 
such a way as to allow access to the next 
fact queued up in the recesses of the AI KB.
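
Run against the toy sketch above, the same 
exchange falls out directly:

print(respond("boys"))   # THE BOYS MAKE THE CARS
print(respond("boys"))   # THE BOYS MAKE THE GUNS

In the toy version the suppression is 
permanent; in a fuller treatment the 
inhibition would decay over time, so that 
a suppressed fact becomes available again 
for later conversation.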

This "Singularity Alert" from Mentifex 
may generate a collective "Huh?" from 
the list readership, but here it is.

Bye for now (and back to the salt mines :-)

Arthur
-- 
http://AiMind-i.com
http://code.google.com/p/mindforth 
http://doi.acm.org/10.1145/307824.307853
http://robots.net/person/AI4U/diary/40.html

