Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Richard Loosemore

Charles D Hixson wrote:

Richard Loosemore wrote:

J Storrs Hall, PhD wrote:

On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote:

J Storrs Hall, PhD wrote:
Any system builders here care to give a guess as to how long it will be before
a robot, with your system as its controller, can walk into the average
suburban home, find the kitchen, make coffee, and serve it?

Eight years.

My system, however, will go one better:  it will be able to make a 
pot of the finest Broken Orange Pekoe and serve it.


In the average suburban home? (No fair having the robot bring its own 
teabags, (or would it be loose tea and strainer?)  or having a coffee 
machine built in, for that matter). It has to live off the land...


Nope, no cheating.

My assumptions are these.

1)  A team size (very) approximately as follows:

- Year 1:   10
- Year 2:   10
- Year 3:   100
- Year 4:   300
- Year 5:   800
- Year 6:   2000
- Year 7:   3000
- Year 8:   4000

2)  Main Project(s) launched each year:

- Year 1:   AI software development environment
- Year 2:   AI software development environment
- Year 3:   Low-level cognitive mechanism experiments
- Year 4:   Global architecture experiments;
Sensorimotor integration
- Year 5:   Motivational system and development tests
- Year 6:   (continuation of above)
- Year 7:   (continuation of above)
- Year 8:   Autonomous tests in real world situations

The tests in Year 8 would be heavily supervised, but by that stage it 
should be possible for it to get on a bus, go to the suburban home, 
put the kettle on (if there was one: if not, go shopping to buy 
whatever supplies might be needed), then make the pot of tea (loose 
leaf of course:  no robot of mine is going to be a barbarian tea-bag 
user) and serve it.



FWIW, the average suburban home around here has coffee, but not tea.  So 
you've now added the test of shopping in a local supermarket.  I don't 
believe it.  Not in eight years.  It wouldn't be allowed past the cash 
register without human help.


Note that this has nothing to do with how intelligent the system is.  
Maybe it would be intelligent enough, if its environment were sane.  
But a robot?  Either it would be seen as a Hollywood gimmick, or people 
would refuse to deal with it.


Robots will first appear in controlled environments.  Hospitals, home, 
stockrooms...other non-public-facing environments.  (I'm excluding 
non-humanoid robots.  Those, especially immobile forms, won't have the 
same level of resistance.)


Well, I am not talking about the event proceeding without anyone 
noticing:  I assume it will be done as a demonstration, so what the 
robot looks like will not matter.  I imagine it would be followed  by a 
press mob.


The point is only whether the system could manage the problems involved 
in doing the shopping and then making the tea.


And I think that other things will be happening at the same time anyway: 
 I suspect that new medicines will already be coming out of the lab, 
from an immobile version of the same system.  So if people are skittish 
about the tea-making robot, they will at least see that there are other, 
obviously beneficial products on the way.


Really, though, the question is whether such a system could be built, 
from the technical point of view.  My only point is that IF the 
resources were available, it could be done.  That is based on my 
understanding of the timeline for my own project.




Richard Loosemore








-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93208591-d770cb


[agi] History of MindForth

2008-02-10 Thread A. T. Murray
From the rewrite-in-progress of the User Manual --

1.2 History of MindForth

In the beginning was Mind.REXX on the Commodore Amiga, 
which the author Mentifex began coding in July of 1993, 
and publicizing in the Usenet comp.lang.rexx newsgroup. 
The late Pushpinder Singh of MIT sent e-mail expressing 
his amazement that anyone would try to do AI in REXX. 
Mentifex mailed back the entire Mind.REXX source code. 
Another fellow, an IBM mainframe programmer, tried to 
port the Amiga Rexxmind to run on his IBM mainframe -- 
which would have been a Kitty-Hawk-to-Concorde leap -- 
but the REXX AI code was not fit for IBM consumption. 
When Mind.REXX thought its first thought in late 1994, 
Mentifex posted news of the event in Usenet newsgroups 
for many of the most significant programming languages. 
Only the Forth community took up the AI challenge and 
expressed any interest in translating the AI program. 
A maker of Forth chips gave advice and counsel, and 
a maker of robots requested a copy of Mind.REXX for 
porting into the Forth in which he programmed his robots. 
Sorely disappointed at not having established a colony 
of AI Minds on IBM mainframes, Mentifex resolved to 
learn Forth on his own and assist in the porting of 
Mind.REXX into Mind.Forth for use in amateur robotics. 

Mentifex bought a copy of Starting Forth at a used 
book store and recorded his pilgrim's progress in the first 
volume of the Mind.Forth Programming Journal (MFPJ). 
The amateur robot-maker, a professional engineer, flew 
to Seattle on business with Boeing and visited Mentifex 
in his Vaierre apartment with a lesson on Forth coding. 
Another engineer, formerly with IBM and a REXX expert 
who had helped Mentifex in the coding of Mind.REXX AI, 
flew to the Bay area for a REXX conference at S.L.A.C. 
and was treated to dinner by the maker of Forth chips. 
Unfortunately, Mentifex did not try hard enough to learn 
Forth and the Forthmind project languished in 1996 and 
1997 -- while Netizens were attacking Mentifex for daring 
to claim that he had developed a theory of mind for AI. 
It gradually dawned on Mentifex that in every Usenet 
newsgroup related to AI or robotics, there was always 
one fellow who considered himself the ultimate authority 
on the subject matter of the newsgroup, and woe unto 
anyone, especially an independent scholar like Mentifex, 
who dared to make an extraordinary scientific claim (ESC) 
on so grave a matter as announcing actual progress in AI. 
When the alpha male of comp.robotics.misc (a really cool 
guy, by the way) brachiated over to Mentifex in the group 
in 1997 and launched an unseemly, vicious ad hominem 
attack, Mentifex knew not how to defend himself and was 
overcome with feelings of immense gratitude when the foxie 
Forth chip maker smote the troublemaker a mighty blow in 
defense of Mentifex. Forthwith Mentifex took up Forth again 
and devoted the entire year of 1998 to porting Mind.REXX 
into the native language of telescopes and robots -- Forth. 

In Mind.REXX, Mentifex had gone overboard in creating 
variables for even the slightest chance that they might 
turn out to be useful. Nobody had ever written a True AI 
before, it was all uncharted territory, and it seemed 
better to err on the side of too many variables rather 
than too few. In Forth, however, variables are anathema. 
Forthers prefer to put a value on the stack instead of 
in a variable. Mentifex never became a genuine, maniacally 
obsessive Forth programmer, but chose to program his AI 
in Forth code that looked enough like other languages to 
be easy to understand and to be easy to port from Forth. 

While Mentifex moved his AI coding efforts from MVP-Forth 
on the Amiga to F-PC on IBM clones and finally to Win32Forth, 
he also in 2001 (a space odyssey) suddenly ported MindForth 
into JavaScript so that users could just click on a link 
and have the Tutorial AI Mind flit across the 'Net and 
take up albeit brief residence on their MSIE computer. 
While Push Singh was simply amazed at doing AI in REXX, 
many Netizens openly laughed and sneered at the idea of 
coding an AI Mind in JavaScript, which was not by any means 
a traditional AI language. Mentifex, however, suspected 
that his Mind.html in JavaScript was slowly building the 
largest installed user base of any AI program in the world, 
because it was so easy to save-to-disk the Mind.html code 
and because Site Meter logs reported the spread of the AI. 
Mentifex fell into the practice of switching back and forth 
between coding AI in JavaScript for a while and then in Forth. 

In March of 2005 Mentifex began coding powerful diagnostic 
routines into MindForth. He began to find and eliminate bugs 
that he could not deal with earlier because he had not even 
suspected their existence. Meanwhile, Mr. Frank J. Russo 
began to code what became http://AIMind-i.com -- a version 
of the Forthmind with its own site on the Web and with 
special abilities far beyond those of 

Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Matt Mahoney
It seems we have different ideas about what AGI is.  It is not a product that
you can make and sell.  It is a service that will evolve from the desire to
automate human labor, currently valued at $66 trillion per year.

I outlined a design in http://www.mattmahoney.net/agi.html
It consists of lots of narrow specialists and an infrastructure for routing
messages to the right experts.  Nobody will control it or own it.  I am not
going to build it.  It will be more complex than any human is capable of
understanding.  But there is enough economic incentive that it will be built
in some form, regardless.
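The routing idea can be illustrated with a toy sketch. This is not Mahoney's actual protocol from agi.html, just a minimal Python illustration of dispatching a message to whichever narrow specialist claims the best match; all names here (Expert, route, the keyword lists) are invented for the example.

```python
# Toy sketch of "narrow specialists plus message routing" (an illustration,
# not Mahoney's protocol). Each expert scores incoming messages; the router
# forwards the message to the highest-scoring expert.

class Expert:
    def __init__(self, name, keywords, handler):
        self.name = name
        self.keywords = set(keywords)
        self.handler = handler

    def score(self, message):
        # Crude relevance: fraction of message words this expert recognizes.
        words = set(message.lower().split())
        return len(words & self.keywords) / max(len(words), 1)

def route(message, experts):
    """Send the message to the highest-scoring expert; return its reply."""
    best = max(experts, key=lambda e: e.score(message))
    return best.name, best.handler(message)

experts = [
    Expert("weather", {"rain", "forecast", "temperature"},
           lambda m: "forecast: sunny"),
    Expert("chess", {"chess", "mate", "opening"},
           lambda m: "suggested move: e4"),
]

print(route("what is the forecast for rain tomorrow", experts))
```

A real infrastructure would need learned routing, payment, and reputation rather than keyword overlap, but the shape (many narrow experts, no central owner) is the same.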

The major technical obstacle is natural language modeling, which is required
by the protocol.  (Thus, my research in text compression).  I realize that a
full (Turing test) model can only be learned by having a full range of human
experiences in a human body.  But AGI is not about reproducing human form or
human thinking.  We used human servants in the past because that was what was
available, not because it was the best solution.  The problem is not to build
a robot to pour your coffee.  The problem is time, money, Maslow's hierarchy
of needs.  A solution could just as well be coffee from a can, ready to drink.


--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 Hmmm. I'd suspect you'd spend all your time and effort organizing the people.
 Orgs can grow that fast if they're grocery stores or something else the new
 hires already pretty much understand, but I don't see that happening smoothly
 in a pure research setting.
 
 I'd claim to be able to do it in 10 years with 30 people with the following 
 provisos:
 1. same 30 people the whole time
 2. ten teams of 3: researcher, programmer, systems guy
 3. all 30 have IQ > 150
 4. big hardware budget, all we build is software
 
 ... but I expect that the hardware for a usable body will be there in 10 
 years, so just buy it.
 
 Project looks like this:
 
 yrs 1-5: getting the basic learning algs worked out and running
 yrs 6-10: teaching the robot to walk, manipulate, balance, pour, understand 
 kitchens, make coffee
 
 It's totally worthless to build a robot that had to be programmed to be able
 to make coffee. One that can understand how to do it by watching people do
 so, however, is absolutely the key to an extremely valuable level of
 intelligence.
 
 Josh
 
 On Friday 08 February 2008 11:46:51 am, Richard Loosemore wrote:
  My assumptions are these.
  
  1)  A team size (very) approximately as follows:
  
   - Year 1:   10
   - Year 2:   10
   - Year 3:   100
   - Year 4:   300
   - Year 5:   800
   - Year 6:   2000
   - Year 7:   3000
   - Year 8:   4000
  
  2)  Main Project(s) launched each year:
  
   - Year 1:   AI software development environment
   - Year 2:   AI software development environment
   - Year 3:   Low-level cognitive mechanism experiments
   - Year 4:   Global architecture experiments;
   Sensorimotor integration
   - Year 5:   Motivational system and development tests
   - Year 6:   (continuation of above)
   - Year 7:   (continuation of above)
   - Year 8:   Autonomous tests in real world situations
  
  The tests in Year 8 would be heavily supervised, but by that stage it 
  should be possible for it to get on a bus, go to the suburban home, put 
  the kettle on (if there was one: if not, go shopping to buy whatever 
  supplies might be needed), then make the pot of tea (loose leaf of 
  course:  no robot of mine is going to be a barbarian tea-bag user) and 
  serve it.
  
 
 


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Bob Mottram
For the immediate future I think we are going to be seeing robots
which are either directly programmed to perform tasks (expert systems
on wheels) or which are taught by direct human supervision.

In the human supervision scenario the robot is walked through a
series of steps which it has to perform to complete a task.  This
could mean manually guiding its actuators, but the most practical way
to do this is via teleoperation.  So, after a few supervised
examples the robot is able to perform the same task autonomously,
abstracting out variations in human performance.  This type of
training already goes on for industrial applications.  Seegrid have a
technology which they call "walk through then work".  Within the next
ten years or so I think what we're going to see is this type of
automation gradually moving into the consumer realm due to the falling
price/performance ratio.  This doesn't necessarily mean AGI in your
home, but it does mean a lot of things will change.
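The "abstract out variations in human performance" step can be sketched very simply: with a few demonstrations recorded as joint-angle trajectories, pointwise averaging already removes much of the demonstrator-to-demonstrator noise. This is a minimal illustration under invented data, not how Seegrid or any production system actually does it.

```python
# Hedged sketch of teach-by-demonstration: average a few teleoperated
# joint-angle trajectories (all resampled to the same length) so that
# variation between human demonstrations cancels out.

def average_trajectories(demos):
    """demos: list of demonstrations, each a list of joint-angle tuples.
    Returns the pointwise mean path across all demonstrations."""
    n = len(demos)
    length = len(demos[0])
    joints = len(demos[0][0])
    averaged = []
    for t in range(length):
        point = tuple(sum(d[t][j] for d in demos) / n for j in range(joints))
        averaged.append(point)
    return averaged

# Three noisy demonstrations of the same two-joint motion (made-up numbers):
demos = [
    [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)],
    [(0.1, 0.0), (0.4, 0.3), (1.1, 0.5)],
    [(0.2, 0.0), (0.6, 0.1), (0.9, 0.3)],
]
path = average_trajectories(demos)
print(path)  # the pointwise mean of the three demos
```

Real systems use time-warping to align demonstrations and learn variance as well as the mean, but even this crude average shows why a handful of supervised examples can suffice.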

The idea that robotics is only about software is fiction.  Good
automation involves cooperation between software, electrical and
mechanical engineers.  In some cases problems are much better solved
electromechanically than by software.  For example, no matter how
smart the software controlling it, a two fingered gripper will only be
able to deal with a limited sub-set of manipulation tasks.  Likewise a
great deal of computation can be avoided by introducing variable
compliance, and making clever use of materials to juggle energy around
the system (biological creatures use these tricks all the time).  Some
aspects of the problem are within the realm of pure software, such as
visual perception, navigation and mapping.  Also, the idea that you
can suspend real world testing until the end of the project is a
recipe for disaster, unless your environment simulators are highly
realistic, which at present involves substantial computing power.

For more intelligent types of learning by imitation you really have to
get into the business of mirror neurons, and ideas of selfhood.  This
means having the robot learn its own dynamics and being able to find
mappings between these and the dynamics of objects which it observes.
However, this can only be achieved if good perception systems are
already developed and working.


On 10/02/2008, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Hmmm. I'd suspect you'd spend all your time and effort organizing the people.
 Orgs can grow that fast if they're grocery stores or something else the new
 hires already pretty much understand, but I don't see that happening smoothly
 in a pure research setting.



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Samantha Atkins
Personally I would rather shoot for a world where the ever present 
nano-swarm saw that I wanted a cup of good coffee and effectively 
created one out of thin air on the spot, cup and all.  Assuming I still 
took pleasure in such archaic practices and ways of changing my internal 
state of course. :-)


I am not well qualified to give a good guess on the original question.  
But given the intersection of current progress in general environment 
comprehension and navigation, better robotic bodies, common sense 
databases, current task training by example and guesses on learning 
algorithm advancement I would be surprised if a robot with such ability 
was more than a decade out.  


- samantha



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Bob Mottram
On 10/02/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
 It seems we have different ideas about what AGI is.  It is not a product that
 you can make and sell.  It is a service that will evolve from the desire to
 automate human labor, currently valued at $66 trillion per year.

Yes.  I think the best way to think about the sort of robotics that we
can reasonably expect to see in the near future is as physical
artifacts which provide a service.  Most robotics intelligence will be
provided as remotely hosted services, because this means that you can
build the physical machine very cheaply with minimal hardware onboard,
and also to a large extent make it future-proof.  It also enables the
kinds of collective subconscious which Ben has talked about in the
context of Second Life agents.  As more computational intelligence
comes online a dumb robot just subscribes to the new service (at a
cost to the user, of course) and with no hardware changes it's
suddenly smarter and able to do more stuff.



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Matt Mahoney
--- Mike Tintner [EMAIL PROTECTED] wrote:

 Matt: I realize that a
 full (Turing test) model can only be learned by having a full range of human
 experiences in a human body.
 
 Pray expand. I thought v. few here think that. Your definition seems to 
 imply AGI must inevitably be embodied.  It also implies an evolutionary 
 model of embodied AGI - - a lower intelligence animal-level model will have 
 to have a proportionately lower agility animal body. It also prompts the v. 
 interesting speculation - (and has it ever been discussed on either 
 forum?) - of what kind of superbody a superagi would have to have?  (I would
 personally find *that* area of future speculation interesting if not super).
 
 Thoughts there too? No superhero fans around? 

A superagi would have billions of sensors and actuators all over the world --
keyboards, cameras, microphones, speakers, display devices, robotic
manipulators, direct brain interfaces, etc.

My claim is that an ideal language model (not AGI) requires human embodiment. 
But we don't need -- or want -- an ideal model.  Turing realized that passing
the imitation game requires duplicating human weaknesses as well as strengths.
 From his famous 1950 paper:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1.
It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.

Why would we want to do that?  (Note that the arithmetic answer above is
deliberately wrong: 34957 + 70764 = 105721, not 105621 -- Turing's machine
imitates human error.)  We can do better.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What is MindForth?

2008-02-10 Thread Joseph Gentle
On Feb 9, 2008 11:53 PM, A. T. Murray [EMAIL PROTECTED] wrote:
 It is not a chatbot.
 The AI engine is arguably the first True AI. It is immortal.


Cool!

What has it done to convince you that it's truly intelligent?

-J



Re: [agi] What is MindForth?

2008-02-10 Thread A. T. Murray
Joseph Gentle wrote on Sun, 10 Feb 2008, in a message now at
http://www.mail-archive.com/agi@v2.listbox.com/msg09803.html

 On Feb 9, 2008 11:53 PM, A. T. Murray [EMAIL PROTECTED] wrote:
 It is not a chatbot.
 The AI engine is arguably the first True AI. It is immortal.


 Cool!

 What has it done to convince you that it's truly intelligent?

 -J

Intelligent means understanding.

When MindForth receives a sentence of English input 
(in the proper subject-verb-object format, for now),
it understands the sentence by creating concept-nodes 
for the English words and by creating associative tags 
to link one concept to another. Thus the AI Mind 
knows the information asserted by the English 
sentence, and can include the asserted idea in 
its own thinking.
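The understanding step described above can be sketched in a few lines. This is a Python illustration of the idea (concept nodes per word, associative tags chaining subject to verb to object), not MindForth's actual Forth data structures; the names `concepts` and `understand` are invented for the example.

```python
# Minimal sketch of the described understanding step: given a three-word
# subject-verb-object sentence, create a concept node per word and add
# associative tags linking subject -> verb -> object.

concepts = {}   # word -> set of words it is associatively tagged to

def understand(sentence):
    """Parse a subject-verb-object sentence into linked concept nodes."""
    words = sentence.lower().strip(".").split()
    subject, verb, obj = words  # SVO format assumed, as in MindForth
    for word in words:
        concepts.setdefault(word, set())  # create concept node if new
    # Associative tags chaining the asserted idea together:
    concepts[subject].add(verb)
    concepts[verb].add(obj)
    return subject, verb, obj

understand("Robots make tea.")
print(concepts)  # {'robots': {'make'}, 'make': {'tea'}, 'tea': set()}
```

Once the tags exist, the asserted idea is available to later retrieval: following the chain from 'robots' reaches 'make' and then 'tea', which is the sense in which the sentence has been "included in the system's own thinking".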

Now for a miniature progress report on Mentifex AI.

http://mind.sourceforge.net/audstm.html 
has been updated with a name-change to 
audSTM Auditory Short Term Memory module 
of free open-source MindForth True AI
with the complete Table of Contents of 
the Mind.Forth User Manual listed at 
page-bottom with active URL-links.

We shall see if happenstance websurfers 
decide to try out any of the AI features
as listed in the Mind.Forth User Manual.

Gentlemen, mesdames, brace yourselves for 
a ballooning Technological Singularity.

ATM
--
http://mind.sourceforge.net/mind4th.html
http://mind.sourceforge.net/m4thuser.html



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Mike Tintner

Matt: I realize that a
full (Turing test) model can only be learned by having a full range of human
experiences in a human body.

Pray expand. I thought v. few here think that. Your definition seems to 
imply AGI must inevitably be embodied.  It also implies an evolutionary 
model of embodied AGI - - a lower intelligence animal-level model will have 
to have a proportionately lower agility animal body. It also prompts the v. 
interesting speculation - (and has it ever been discussed on either 
forum?) - of what kind of superbody a superagi would have to have?  (I would 
personally find *that* area of future speculation interesting if not super). 
Thoughts there too? No superhero fans around? 


