In response to Richard Loosemore below,

> A. T. Murray wrote:
>> MindForth free open AI source code on-line at
>> http://mentifex.virtualentity.com/mind4th.html
>> has become a True AI-Complete thinking mind
>> after years of tweaking and debugging.
>>
>> On 22 January 2008 the AI Forthmind began to
>> think effortlessly and almost flawlessly in
>> loops of meandering chains of thought.
>>
>> Users are invited to download the AI Mind
>> and decide for themselves if what they see
>> is machine intelligence and thinking. The
>> http://mentifex.virtualentity.com/m4thuser.html
>> User Manual explains all the steps involved.
>>
>> MindForth is the Model-T of True AI software,
>> roughly comparable to the state of the art in
>> automobiles one hundred years ago in 1908.
>> As such, the AI in Forth will not blow you
>> away with any advanced features, but will
>> subtly show you the most primitive display
>> of spreading activation among concepts.
>>
>> The world's first publicly available True AI
>> achieves meandering chains of thought by
>> "detouring" away from incomplete ideas
>> lacking knowledge-base data and by asking
>> questions of the human user when the AI is
>> unable to complete a sentence of thought.
>>
>> The original MindForth program has spawned
>> http://AIMind-I.com as the first offspring
>> in the evolution of artificial intelligence.
>>
>> ATM/Mentifex
>
> Okay now you got my attention.
>
> Arthur: what has it achieved with its thinking?
Up until Tues.22.JAN.2008 (four days ago), the AI would always encounter some bug that derailed its thinking. Starting three years ago, in March of 2005, I coded extensive "diagnostic" routines into MindForth. Gradually it stopped spouting gibberish (a frequent complaint against Mentifex AI), but countless bugs still kept popping up that I had to deal with one after another. Suddenly, on 22.JAN.2008, there were no "show-stopper" bugs anymore -- just glitches in need of improvement.

> Can you show an example of its best cogitations?

You can tell it a multitude of subject-verb-object (SVO) facts, and then you can query it in various ways. The following is a very new development. Six years ago, when I was gearing up to publish AI4U, my goal for the AI output was that it should parrot back each sentence of input, because, after all, each SVO concept had been activated by the mere fact of input. A few weeks ago, that goal changed to what the AI does now -- it briefly activates only one concept at a time, of either input or reentrant output.

So now if you enter "cats eat fish", the AI briefly activates each concept, coming to rest on the FISH concept (which is new to the AI). Immediately the SVO mind-module starts to generate a sentence about the active FISH concept, but the verbPhrase module fails to find a sufficiently active verb. The "detour" variable then detours the thought process all the way up the Chomskyan syntactic superstructure to the SVO module, or the English module even higher, or maybe to the Think module higher still (I don't remember without inspecting the code), where the detour-flag calls the whatAuxSDO (what-do-Subjects-do) module to ask the human user a question about FISH. As the AI stands right now, since 24.JAN.2008 the output will look like

  FISH WHAT DO FISH DO

If the human user (or person in job category "attendant") answers the question, then the AI knows one more fact and continues the dialogue with the human user.
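The detour behavior described above can be sketched in a few lines of Python (not the actual Forth code; the class and method names here are illustrative assumptions, not MindForth identifiers): input activates each SVO concept in turn, activation comes to rest on the last word, and when no known fact supplies a verb for the active concept, generation detours into a question for the user.

```python
# Hypothetical sketch of the detour-to-question behavior; not MindForth code.

class MiniMind:
    def __init__(self):
        self.kb = []          # list of (subject, verb, obj) facts
        self.active = None    # the one briefly activated concept

    def hear(self, sentence):
        """Store an SVO fact; activation comes to rest on the last word."""
        subj, verb, obj = sentence.lower().split()
        self.kb.append((subj, verb, obj))
        self.active = obj     # e.g. "fish" after "cats eat fish"

    def think(self):
        """Generate a sentence about the active concept, or, lacking a
        sufficiently active verb, detour into asking the user a question."""
        for subj, verb, obj in self.kb:
            if subj == self.active:
                return f"{subj} {verb} {obj}".upper()
        # detour: no verb fact known for the active concept
        return f"{self.active} WHAT DO {self.active} DO".upper()

mind = MiniMind()
mind.hear("cats eat fish")
print(mind.think())          # FISH WHAT DO FISH DO
mind.hear("fish eat worms")  # the user answers the question
mind.active = "fish"
print(mind.think())          # FISH EAT WORMS
```

Once the user's answer supplies the missing verb fact, the same generation pass completes a sentence instead of detouring, which is the dialogue loop described above.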
But (and this is even more interesting) if the human user just sits there to watch the AI think and does not answer the question, the AI repeats the question a few times. Then, in a development I coded also on Tues.22.JAN.2008 because the AI display was so bland and boring, a "thotnum" (thought-number) system detects the repetitious thought inherent in the question and diverts the train of thought to the EGO self-resuscitation module, which activates the oldest "post-vault" concept in the self-rejuvenating memory of the AI Mind. Right now the AI just blurts out the name of the oldest concept (say, CATS), and I need to code in some extra activation to get a sentence going.

But if you converse with the AI using known words, or if you answer all its queries about unknown words, you and the AI gradually fill its knowledge base with SVO-type facts -- not a big ontology like the Cyc that Stephen Reed worked on, but still a large domain of subject-verb-object possibilities. You may query the KB in several ways, e.g.:

  what do cats eat
  cats
  cats eat

and so forth, each entered as a line of user input.

> If it is just producing "meandering chains of thought"
> then this is not AI, because random chains of thought
> are trivially easy to produce (it was done already in
> the 1960s).

The difference here in January of 2008 is that the words forming the thoughts are conceptualized, and thought in MindForth occurs only by "spreading activation." Eventually there will be fancier forms of thought, such as prepositional phrases, but in this "Model-T of artificial intelligence" the initial goal was to get the very most basic AI up and running.

> Richard Loosemore

Richard, thank you for taking the time to ask your above questions about MindForth. I would like to answer all such questions to the best of my ability, because even to be proved wrong is to learn something.

Bye for now.

Sincerely,

Arthur T.
Murray

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=90255732-61e6a2
