Re: [agi] Wozniak's defn of intelligence

2008-02-11 Thread Samantha Atkins

Bob Mottram wrote:

On 10/02/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
  

It seems we have different ideas about what AGI is.  It is not a product that
you can make and sell.  It is a service that will evolve from the desire to
automate human labor, currently valued at $66 trillion per year.



Yes.  I think the best way to think about the sort of robotics that we
can reasonably expect to see in the near future is as physical
artifacts which provide a service.  Most robotics intelligence will be
provided as remotely hosted services, because this means that you can
build the physical machine very cheaply with minimal hardware onboard,
and also to a large extent make it future-proof.
I can see this for managing the download/installation of capabilities 
with periodic feedback of experience.   It is less likely that 
centralized systems would effectively teleoperate large numbers of 
remote robots.   The bandwidth and complexity would go up rapidly.  


  It also enables the
kinds of collective subconscious which Ben has talked about in the
context of Second Life agents.  As more computational intelligence
comes online a dumb robot just subscribes to the new service (at a
cost to the user, of course) 
What for?  It may be part of the selling point of general robotics that 
your unit gains abilities at no additional charge over time. 


and with no hardware changes it's
suddenly smarter and able to do more stuff.
  
Ugly things like Sarbanes-Oxley accounting rules could come into play 
limiting what sorts of mods are allowed or how they are priced. 


- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=94603346-a08d2f


Re: [agi] Wozniak's defn of intelligence

2008-02-11 Thread J Storrs Hall, PhD
It's worth noting in this connection that once you get up to the level of 
mammals, everything is very high compliance, low stiffness, mostly serial 
joint architecture (no natural Stewart platforms, although you can of course 
grab something with two hands if need be) typically with significant energy 
storage in the power train (i.e. springs). This means that the control has to 
be fully Newtonian, something most commercial robotics haven't gotten up to 
yet.
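Josh's point about springs in the power train can be made concrete with a toy sketch (mine, not his; the function and numbers are purely illustrative): with series elasticity, joint torque is carried through the spring, so torque control reduces to servoing the spring deflection rather than the link position directly.

```python
# Minimal sketch of control through a series-elastic joint, where motor
# and link are coupled by a spring of stiffness k (N*m/rad).  The torque
# delivered to the link is tau = k * (theta_motor - theta_link), so to
# command a desired torque the motor target must track the moving link:
#   theta_motor = theta_link + tau_desired / k.
# All names/values here are hypothetical, for illustration only.

def motor_target(theta_link: float, tau_desired: float, k: float) -> float:
    """Motor-side angle (rad) that produces tau_desired (N*m) through a
    spring of stiffness k, given the current link angle theta_link."""
    return theta_link + tau_desired / k

# Example: link at 0.1 rad, commanding 2.0 N*m through a 100 N*m/rad spring.
print(motor_target(0.1, 2.0, 100.0))  # ~0.12 rad
```

The dependence on the live link state is the point: the controller cannot treat the mechanism as a stiff position source, which is why stiff-robot control schemes don't carry over.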

I think that state of the art is just now getting to dynamically-stable-only 
biped walkers. I've seen a couple of articles in the past year, but it 
certainly isn't widespread, and it remains to be seen how real.

Josh

On Sunday 10 February 2008 04:35:13 pm, Bob Mottram wrote:

 The idea that robotics is only about software is fiction.  Good
 automation involves cooperation between software, electrical and
 mechanical engineers.  In some cases problems are much better solved
 electromechanically than by software.  For example, no matter how
 smart the software controlling it, a two fingered gripper will only be
 able to deal with a limited sub-set of manipulation tasks.  Likewise a
 great deal of computation can be avoided by introducing variable
 compliance, and making clever use of materials to juggle energy around
 the system (biological creatures use these tricks all the time). 



Re: [agi] Wozniak's defn of intelligence

2008-02-11 Thread Bob Mottram
On 11/02/2008, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 I think that state of the art is just now getting to dynamically-stable-only
 biped walkers. I've seen a couple of articles in the past year, but it
 certainly isn't widespread, and it remains to be seen how real.


Famous robots such as ASIMO are far less energy efficient than humans
in bipedal locomotion.  The passive/dynamic approach has become more
popular in recent years, with some research robots approaching human
levels of energy efficiency.

http://www-personal.umich.edu/~shc/robots.html
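The efficiency comparison above is usually quantified by the dimensionless specific cost of transport, CoT = E / (m g d); in the passive-dynamic-walker literature, figures of roughly 3.2 for ASIMO versus roughly 0.2 for a walking human are commonly cited. A minimal sketch of the metric (the example numbers are hypothetical, not measurements):

```python
# Specific cost of transport: energy spent per unit weight per unit
# distance, cot = E / (m * g * d).  Dimensionless, so it compares robots
# and animals of different sizes directly.
G = 9.81  # gravitational acceleration, m/s^2

def cost_of_transport(energy_j: float, mass_kg: float, distance_m: float) -> float:
    return energy_j / (mass_kg * G * distance_m)

# Hypothetical example: a 50 kg robot spending 10 kJ to travel 100 m.
print(round(cost_of_transport(10_000, 50, 100), 3))  # 0.204
```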



Re: [agi] Wozniak's defn of intelligence

2008-02-11 Thread Richard Loosemore

Bob Mottram wrote:

On 11/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:

But now, by contrast, if you are assuming (as Matt does, I believe) that
somehow a cluster of sub-intelligent specialists across the net will
gradually increase in intelligence until their sum total amounts to a
full AI, then you are begging some enormous questions.


The army of experts is only one possibility.  Probably like most
people on this list I think producing more intelligent machines is
going to require a more closely integrated cognitive architecture.
Integration however doesn't mean that the system has to reside on a
single computer or physical device.


No, agreed:  what I was really arguing against was a scenario that comes 
up frequently, in which AI is achieved by accident, so to speak, as a 
lot of expert systems gradually accumulate in the net.


There is no reason, as you say, why someone should not design a complete 
AI system that was distributed.  In practice, I think that any 
organization that will have the wherewithal to do that will take firm 
steps to keep it in house.



Richard Loosemore



Re: [agi] Wozniak's defn of intelligence

2008-02-11 Thread Richard Loosemore

J Storrs Hall, PhD wrote:
Hmmm. I'd suspect you'd spend all your time and effort organizing the people. 
Orgs can grow that fast if they're grocery stores or something else the new 
hires already pretty much understand, but I don't see that happening smoothly 
in a pure research setting.


I would not be organizing the company:  my COO will do that.

My plan is predicated on a particular research approach, and that 
research would not be like putting a few hundred conventional AI 
people together and trying to herd them (I shudder at the thought).


I have a very specific plan already worked out, so I would hire 
specialists capable of taking on each component of the work.


In that sense, it would be 10% research and 90% implementation.  Almost 
exactly the reverse of what you would get in a university AI department.


In fact, the situation is superficially similar to Doug Lenat's approach: 
  he decided on a plan, then hired people to carry out the specific 
plan, with only (I am guessing... Stephen?) 10% research, while the 
other 90% was about ontologizing.



I'd claim to be able to do it in 10 years with 30 people with the following 
provisos:

1. same 30 people the whole time
2. ten teams of 3: researcher, programmer, systems guy
3. all 30 have IQ > 150
4. big hardware budget, all we build is software

... but I expect that the hardware for a usable body will be there in 10 
years, so just buy it.


Project looks like this:

yrs 1-5: getting the basic learning algs worked out and running
yrs 6-10: teaching the robot to walk, manipulate, balance, pour, understand 
kitchens, make coffee


It's totally worthless to build a robot that had to be programmed to be able 
to make coffee. One that can understand how to do it by watching people do 
so, however, is absolutely the key to an extremely valuable level of 
intelligence.


100% agreement on that.


Richard Loosemore



Josh

On Friday 08 February 2008 11:46:51 am, Richard Loosemore wrote:

My assumptions are these.

1)  A team size (very) approximately as follows:

 - Year 1:   10
 - Year 2:   10
 - Year 3:   100
 - Year 4:   300
 - Year 5:   800
 - Year 6:   2000
 - Year 7:   3000
 - Year 8:   4000

2)  Main Project(s) launched each year:

 - Year 1:   AI software development environment
 - Year 2:   AI software development environment
 - Year 3:   Low-level cognitive mechanism experiments
 - Year 4:   Global architecture experiments;
 Sensorimotor integration
 - Year 5:   Motivational system and development tests
 - Year 6:   (continuation of above)
 - Year 7:   (continuation of above)
 - Year 8:   Autonomous tests in real world situations

The tests in Year 8 would be heavily supervised, but by that stage it 
should be possible for it to get on a bus, go to the suburban home, put 
the kettle on (if there was one: if not, go shopping to buy whatever 
supplies might be needed), then make the pot of tea (loose leaf of 
course:  no robot of mine is going to be a barbarian tea-bag user) and 
serve it.










Re: [agi] Wozniak's defn of intelligence

2008-02-11 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Bob Mottram wrote:
  On 11/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
  But now, by contrast, if you are assuming (as Matt does, I believe) that
  somehow a cluster of sub-intelligent specialists across the net will
  gradually increase in intelligence until their sum total amounts to a
  full AI, then you are begging some enormous questions.
  
  The army of experts is only one possibility.  Probably like most
  people on this list I think producing more intelligent machines is
  going to require a more closely integrated cognitive architecture.
  Integration however doesn't mean that the system has to reside on a
  single computer or physical device.
 
 No, agreed:  what I was really arguing against was a scenario that comes 
 up frequently, in which AI is achieved by accident, so to speak, as a 
 lot of expert systems gradually accumulate in the net.
 
 There is no reason, as you say, why someone should not design a complete 
 AI system that was distributed.  In practice, I think that any 
 organization that will have the wherewithal to do that will take firm 
 steps to keep it in house.

The idea behind my distributed search/message posting service is an
infrastructure that motivates the provision of useful services.  It is an
economic system based on the currency of information, which has negative
value.  Peers compete for bandwidth and storage by providing value and
developing a reputation so that other peers will copy and forward their
messages.  AI doesn't just happen.  There is an incentive to make it happen.
 But no organization will control it.  It is too big for that.
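The incentive mechanism Matt describes can be sketched loosely as follows (this is my illustration, not his specification; all names are hypothetical): a peer with limited forwarding capacity keeps the messages whose senders have the highest earned reputation, so reputation is what buys a message reach through the network.

```python
# Toy sketch of reputation-weighted forwarding.  A peer ranks pending
# messages by the sender's reputation score and forwards only as many
# as its capacity allows; low-reputation senders get dropped first.
from typing import NamedTuple

class Message(NamedTuple):
    sender: str
    body: str

def select_to_forward(messages, reputation, capacity):
    """Keep the top-`capacity` messages, ranked by sender reputation."""
    ranked = sorted(messages,
                    key=lambda m: reputation.get(m.sender, 0.0),
                    reverse=True)
    return ranked[:capacity]

msgs = [Message("alice", "useful result"),
        Message("bob", "spam"),
        Message("carol", "measurement data")]
rep = {"alice": 0.9, "bob": 0.1, "carol": 0.7}
print([m.sender for m in select_to_forward(msgs, rep, 2)])  # ['alice', 'carol']
```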


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Richard Loosemore

Charles D Hixson wrote:

Richard Loosemore wrote:

J Storrs Hall, PhD wrote:

On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote:

J Storrs Hall, PhD wrote:
Any system builders here care to give a guess as to how long it 
will be 

before
a robot, with your system as its controller, can walk into the 
average suburban home, find the kitchen, make coffee, and serve it?

Eight years.

My system, however, will go one better:  it will be able to make a 
pot of the finest Broken Orange Pekoe and serve it.


In the average suburban home? (No fair having the robot bring its own 
teabags, (or would it be loose tea and strainer?)  or having a coffee 
machine built in, for that matter). It has to live off the land...


Nope, no cheating.

My assumptions are these.

1)  A team size (very) approximately as follows:

- Year 1:   10
- Year 2:   10
- Year 3:   100
- Year 4:   300
- Year 5:   800
- Year 6:   2000
- Year 7:   3000
- Year 8:   4000

2)  Main Project(s) launched each year:

- Year 1:   AI software development environment
- Year 2:   AI software development environment
- Year 3:   Low-level cognitive mechanism experiments
- Year 4:   Global architecture experiments;
Sensorimotor integration
- Year 5:   Motivational system and development tests
- Year 6:   (continuation of above)
- Year 7:   (continuation of above)
- Year 8:   Autonomous tests in real world situations

The tests in Year 8 would be heavily supervised, but by that stage it 
should be possible for it to get on a bus, go to the suburban home, 
put the kettle on (if there was one: if not, go shopping to buy 
whatever supplies might be needed), then make the pot of tea (loose 
leaf of course:  no robot of mine is going to be a barbarian tea-bag 
user) and serve it.



FWIW, the average suburban home around here has coffee, but not tea.  So 
you've now added the test of shopping in a local supermarket.  I don't 
believe it.  Not in eight years.  It wouldn't be allowed past the cash 
register without human help.


Note that this has nothing to do with how intelligent the system is.  
Maybe it would be intelligent enough, if its environment were sane.  
But a robot?  Either it would be seen as a Hollywood gimmick, or people 
would refuse to deal with it.


Robots will first appear in controlled environments.  Hospitals, home, 
stockrooms...other non-public-facing environments.  (I'm excluding 
non-humanoid robots.  Those, especially immobile forms, won't have the 
same level of resistance.)


Well, I am not talking about the event proceeding without anyone 
noticing:  I assume it will be done as a demonstration, so what the 
robot looks like will not matter.  I imagine it would be followed  by a 
press mob.


The point is only whether the system could manage the problems involved 
in doing the shopping and then making the tea.


And I think that other things will be happening at the same time anyway: 
 I suspect that new medicines will already be coming out of the lab, 
from an immobile version of the same system.  So if people are skittish 
about the tea-making robot, they will at least see that there are other, 
obviously beneficial products on the way.


Really, though, the question is whether such a system could be built, 
from the technical point of view.  My only point is that IF the 
resources were available, it could be done.  That is based on my 
understanding of the timeline for my own project.




Richard Loosemore










Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Matt Mahoney
It seems we have different ideas about what AGI is.  It is not a product that
you can make and sell.  It is a service that will evolve from the desire to
automate human labor, currently valued at $66 trillion per year.

I outlined a design in http://www.mattmahoney.net/agi.html
It consists of lots of narrow specialists and an infrastructure for routing
messages to the right experts.  Nobody will control it or own it.  I am not
going to build it.  It will be more complex than any human is capable of
understanding.  But there is enough economic incentive that it will be built
in some form, regardless.
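The routing idea can be illustrated with a crude sketch (the real design is in Matt's agi.html; everything here, experts and keywords alike, is hypothetical): each narrow specialist advertises topics it handles, and a message goes to whichever expert matches it best.

```python
# Toy sketch of routing messages to narrow experts.  Each expert
# advertises a keyword set; a message is routed to the expert with the
# largest keyword overlap.  A real protocol would need learned language
# models, as Matt notes, not literal keyword matching.

EXPERTS = {
    "weather-bot": {"rain", "forecast", "temperature"},
    "chess-bot": {"chess", "opening", "endgame"},
}

def route(message: str) -> str:
    words = set(message.lower().split())
    # Score each expert by how many of its keywords appear in the message.
    return max(EXPERTS, key=lambda e: len(EXPERTS[e] & words))

print(route("what is the forecast for rain tomorrow"))  # weather-bot
```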

The major technical obstacle is natural language modeling, which is required
by the protocol.  (Thus, my research in text compression).  I realize that a
full (Turing test) model can only be learned by having a full range of human
experiences in a human body.  But AGI is not about reproducing human form or
human thinking.  We used human servants in the past because that was what was
available, not because it was the best solution.  The problem is not to build
a robot to pour your coffee.  The problem is time, money, Maslow's hierarchy
of needs.  A solution could just as well be coffee from a can, ready to drink.


--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 Hmmm. I'd suspect you'd spend all your time and effort organizing the
 people. 
 Orgs can grow that fast if they're grocery stores or something else the new 
 hires already pretty much understand, but I don't see that happening
 smoothly 
 in a pure research setting.
 
 I'd claim to be able to do it in 10 years with 30 people with the following 
 provisos:
 1. same 30 people the whole time
 2. ten teams of 3: researcher, programmer, systems guy
 3. all 30 have IQ > 150
 4. big hardware budget, all we build is software
 
 ... but I expect that the hardware for a usable body will be there in 10 
 years, so just buy it.
 
 Project looks like this:
 
 yrs 1-5: getting the basic learning algs worked out and running
 yrs 6-10: teaching the robot to walk, manipulate, balance, pour, understand 
 kitchens, make coffee
 
 It's totally worthless to build a robot that had to be programmed to be able
 
 to make coffee. One that can understand how to do it by watching people do 
 so, however, is absolutely the key to an extremely valuable level of 
 intelligence.
 
 Josh
 
 On Friday 08 February 2008 11:46:51 am, Richard Loosemore wrote:
  My assumptions are these.
  
  1)  A team size (very) approximately as follows:
  
   - Year 1:   10
   - Year 2:   10
   - Year 3:   100
   - Year 4:   300
   - Year 5:   800
   - Year 6:   2000
   - Year 7:   3000
   - Year 8:   4000
  
  2)  Main Project(s) launched each year:
  
   - Year 1:   AI software development environment
   - Year 2:   AI software development environment
   - Year 3:   Low-level cognitive mechanism experiments
   - Year 4:   Global architecture experiments;
   Sensorimotor integration
   - Year 5:   Motivational system and development tests
   - Year 6:   (continuation of above)
   - Year 7:   (continuation of above)
   - Year 8:   Autonomous tests in real world situations
  
  The tests in Year 8 would be heavily supervised, but by that stage it 
  should be possible for it to get on a bus, go to the suburban home, put 
  the kettle on (if there was one: if not, go shopping to buy whatever 
  supplies might be needed), then make the pot of tea (loose leaf of 
  course:  no robot of mine is going to be a barbarian tea-bag user) and 
  serve it.
  
 


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Bob Mottram
For the immediate future I think we are going to be seeing robots
which are either directly programmed to perform tasks (expert systems
on wheels) or which are taught by direct human supervision.

In the human supervision scenario the robot is walked through a
series of steps which it has to perform to complete a task.  This
could mean manually guiding its actuators, but the most practical way
to do this is via teleoperation.  So, after a few supervised
examples the robot is able to perform the same task autonomously,
abstracting out variations in human performance.  This type of
training already goes on for industrial applications.  Seegrid have a
technology which they call "walk through then work".  Within the next
ten years or so I think what we're going to see is this type of
automation gradually moving into the consumer realm due to the falling
price/performance ratio.  This doesn't necessarily mean AGI in your
home, but it does mean a lot of things will change.
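The "abstracting out variations" step can be illustrated crudely (this is not Seegrid's actual method; the example is mine): given a few demonstrations of the same task sampled at the same number of waypoints, averaging them pointwise washes out operator noise.

```python
# Toy sketch of learning from a few teleoperated demonstrations: average
# aligned (x, y) waypoints across demos so idiosyncrasies of any one
# human run cancel out.  Real systems would also need to time-align
# demos of different lengths, which this sketch assumes away.

def average_trajectory(demos):
    """demos: list of equal-length lists of (x, y) waypoints."""
    n = len(demos)
    return [
        (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
        for pts in zip(*demos)
    ]

demo_a = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.0)]   # operator drifts up
demo_b = [(0.0, 0.0), (1.0, -0.1), (2.0, 0.0)]  # operator drifts down
print(average_trajectory([demo_a, demo_b]))  # [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
```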

The idea that robotics is only about software is fiction.  Good
automation involves cooperation between software, electrical and
mechanical engineers.  In some cases problems are much better solved
electromechanically than by software.  For example, no matter how
smart the software controlling it, a two fingered gripper will only be
able to deal with a limited sub-set of manipulation tasks.  Likewise a
great deal of computation can be avoided by introducing variable
compliance, and making clever use of materials to juggle energy around
the system (biological creatures use these tricks all the time).  Some
aspects of the problem are within the realm of pure software, such as
visual perception, navigation and mapping.  Also, the idea that you
can suspend real world testing until the end of the project is a
recipe for disaster, unless your environment simulators are highly
realistic, which at present involves substantial computing power.

For more intelligent types of learning by imitation you really have to
get into the business of mirror neurons, and ideas of selfhood.  This
means having the robot learn its own dynamics and being able to find
mappings between these and the dynamics of objects which it observes.
However, this can only be achieved if good perception systems are
already developed and working.


On 10/02/2008, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Hmmm. I'd suspect you'd spend all your time and effort organizing the people.
 Orgs can grow that fast if they're grocery stores or something else the new
 hires already pretty much understand, but I don't see that happening smoothly
 in a pure research setting.



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Samantha Atkins
Personally I would rather shoot for a world where the ever present 
nano-swarm saw that I wanted a cup of good coffee and effectively 
created one out of thin air on the spot, cup and all.  Assuming I still 
took pleasure in such archaic practices and ways of changing my internal 
state of course. :-)


I am not well qualified to give a good guess on the original question.  
But given the intersection of current progress in general environment 
comprehension and navigation, better robotic bodies, common sense 
databases, current task training by example and guesses on learning 
algorithm advancement I would be surprised if a robot with such ability 
was more than a decade out.  


- samantha



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Bob Mottram
On 10/02/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
 It seems we have different ideas about what AGI is.  It is not a product that
 you can make and sell.  It is a service that will evolve from the desire to
 automate human labor, currently valued at $66 trillion per year.

Yes.  I think the best way to think about the sort of robotics that we
can reasonably expect to see in the near future is as physical
artifacts which provide a service.  Most robotics intelligence will be
provided as remotely hosted services, because this means that you can
build the physical machine very cheaply with minimal hardware onboard,
and also to a large extent make it future-proof.  It also enables the
kinds of collective subconscious which Ben has talked about in the
context of Second Life agents.  As more computational intelligence
comes online a dumb robot just subscribes to the new service (at a
cost to the user, of course) and with no hardware changes it's
suddenly smarter and able to do more stuff.



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Matt Mahoney
--- Mike Tintner [EMAIL PROTECTED] wrote:

 Matt: I realize that a
 full (Turing test) model can only be learned by having a full range of human
 experiences in a human body.
 
 Pray expand. I thought v. few here think that. Your definition seems to 
 imply AGI must inevitably be embodied.  It also implies an evolutionary 
 model of embodied AGI - - a lower intelligence animal-level model will have 
 to have a proportionately lower agility animal body. It also prompts the v. 
 interesting speculation - (and has it ever been discussed on either 
 forum?) - of what kind of superbody a superagi would have to have?  (I would
 personally find *that* area of future speculation interesting if not super).
 
 Thoughts there too? No superhero fans around? 

A superagi would have billions of sensors and actuators all over the world --
keyboards, cameras, microphones, speakers, display devices, robotic
manipulators, direct brain interfaces, etc.

My claim is that an ideal language model (not AGI) requires human embodiment. 
But we don't need -- or want -- an ideal model.  Turing realized that passing
the imitation game requires duplicating human weaknesses as well as strengths.
 From his famous 1950 paper:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1.
It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.

Why would we want to do that?  We can do better.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread Mike Tintner

Matt: I realize that a
full (Turing test) model can only be learned by having a full range of human
experiences in a human body.

Pray expand. I thought v. few here think that. Your definition seems to 
imply AGI must inevitably be embodied.  It also implies an evolutionary 
model of embodied AGI - - a lower intelligence animal-level model will have 
to have a proportionately lower agility animal body. It also prompts the v. 
interesting speculation - (and has it ever been discussed on either 
forum?) - of what kind of superbody a superagi would have to have?  (I would 
personally find *that* area of future speculation interesting if not super). 
Thoughts there too? No superhero fans around? 





Re: [agi] Wozniak's defn of intelligence

2008-02-09 Thread Charles D Hixson

Richard Loosemore wrote:

J Storrs Hall, PhD wrote:

On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote:

J Storrs Hall, PhD wrote:
Any system builders here care to give a guess as to how long it 
will be 

before
a robot, with your system as its controller, can walk into the 
average suburban home, find the kitchen, make coffee, and serve it?

Eight years.

My system, however, will go one better:  it will be able to make a 
pot of the finest Broken Orange Pekoe and serve it.


In the average suburban home? (No fair having the robot bring its own 
teabags, (or would it be loose tea and strainer?)  or having a coffee 
machine built in, for that matter). It has to live off the land...


Nope, no cheating.

My assumptions are these.

1)  A team size (very) approximately as follows:

- Year 1:   10
- Year 2:   10
- Year 3:   100
- Year 4:   300
- Year 5:   800
- Year 6:   2000
- Year 7:   3000
- Year 8:   4000

2)  Main Project(s) launched each year:

- Year 1:   AI software development environment
- Year 2:   AI software development environment
- Year 3:   Low-level cognitive mechanism experiments
- Year 4:   Global architecture experiments;
Sensorimotor integration
- Year 5:   Motivational system and development tests
- Year 6:   (continuation of above)
- Year 7:   (continuation of above)
- Year 8:   Autonomous tests in real world situations

The tests in Year 8 would be heavily supervised, but by that stage it 
should be possible for it to get on a bus, go to the suburban home, 
put the kettle on (if there was one: if not, go shopping to buy 
whatever supplies might be needed), then make the pot of tea (loose 
leaf of course:  no robot of mine is going to be a barbarian tea-bag 
user) and serve it.



Richard Loosemore


FWIW, the average suburban home around here has coffee, but not tea.  So 
you've now added the test of shopping in a local supermarket.  I don't 
believe it.  Not in eight years.  It wouldn't be allowed past the cash 
register without human help.


Note that this has nothing to do with how intelligent the system is.  
Maybe it would be intelligent enough, if its environment were sane.  
But a robot?  Either it would be seen as a Hollywood gimmick, or people 
would refuse to deal with it.


Robots will first appear in controlled environments.  Hospitals, home, 
stockrooms...other non-public-facing environments.  (I'm excluding 
non-humanoid robots.  Those, especially immobile forms, won't have the 
same level of resistance.)




[agi] Wozniak's defn of intelligence

2008-02-08 Thread J Storrs Hall, PhD
[ http://www.chron.com/disp/story.mpl/headline/biz/5524028.html ]

Steve Wozniak has given up on artificial intelligence.
What is intelligence? Apple's co-founder asked an audience of about 550 
Thursday at the Houston area's first Up Experience conference in Stafford.
His answer? A robot that could get him a cup of coffee.
You can come into my house and make a cup of coffee and I can go into your 
house and make a cup of coffee, he said. Imagine what it would take for a 
robot to do that.
It would have to negotiate the home, identify the coffee machine and know how 
it works, he noted.
But that is not something a machine is capable of learning — at least not in 
his lifetime, added Wozniak, who rolled onto the stage on his ever-present 
Segway before delivering a rapid-fire speech on robotics, his vision of 
robots in classrooms and the long haul ahead for artificial intelligence.

...

Any system builders here care to give a guess as to how long it will be before 
a robot, with your system as its controller, can walk into the average 
suburban home, find the kitchen, make coffee, and serve it?


Re: [agi] Wozniak's defn of intelligence

2008-02-08 Thread Richard Loosemore

J Storrs Hall, PhD wrote:

On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote:

J Storrs Hall, PhD wrote:
Any system builders here care to give a guess as to how long it will be 
before 
a robot, with your system as its controller, can walk into the average 
suburban home, find the kitchen, make coffee, and serve it?

Eight years.

My system, however, will go one better:  it will be able to make a pot 
of the finest Broken Orange Pekoe and serve it.


In the average suburban home? (No fair having the robot bring its own teabags, 
(or would it be loose tea and strainer?)  or having a coffee machine built 
in, for that matter). It has to live off the land...


Nope, no cheating.

My assumptions are these.

1)  A team size (very) approximately as follows:

- Year 1:   10
- Year 2:   10
- Year 3:   100
- Year 4:   300
- Year 5:   800
- Year 6:   2000
- Year 7:   3000
- Year 8:   4000

2)  Main Project(s) launched each year:

- Year 1:   AI software development environment
- Year 2:   AI software development environment
- Year 3:   Low-level cognitive mechanism experiments
- Year 4:   Global architecture experiments;
Sensorimotor integration
- Year 5:   Motivational system and development tests
- Year 6:   (continuation of above)
- Year 7:   (continuation of above)
- Year 8:   Autonomous tests in real world situations

The tests in Year 8 would be heavily supervised, but by that stage it 
should be possible for it to get on a bus, go to the suburban home, put 
the kettle on (if there was one: if not, go shopping to buy whatever 
supplies might be needed), then make the pot of tea (loose leaf of 
course:  no robot of mine is going to be a barbarian tea-bag user) and 
serve it.



Richard Loosemore



Re: [agi] Wozniak's defn of intelligence

2008-02-08 Thread Bob Mottram
On 08/02/2008, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Any system builders here care to give a guess as to how long it will be before
 a robot, with your system as its controller, can walk into the average
 suburban home, find the kitchen, make coffee, and serve it?


Robots which can navigate in the home, knowing where the kitchen is,
are a near term prospect.  With simple navigation systems such as
NorthStar, commercially available robots will be able to do this
within a year, although they will require multiple projectors to cover
an entire house.

More sophisticated navigation and object recognition abilities will
require a less trivial approach using vision and possibly lasers
(although I don't see lasers playing a big part in the future of home
robotics).  I know there are commercially available robots which do
this already, but they're somewhat pricey and are typically confined
to factories so it may be a while before the price/performance comes
down to consumer levels.  This is something which I'm working on, and
I think a fairly conservative estimate is in the region of 5-10 years.
With luck I'll have a working solution within the next few years.

Making and serving coffee is more difficult, and success in
recognising and handling objects will depend very much upon earlier
developments with navigation.  Perceiving objects in 3D requires very
similar algorithms to the SLAM methods used in navigation, just on a
smaller scale.  There will probably be a substantial amount of
crossover between robotic manipulation and the development of human
prosthetics, such as the recent Luke arm.  Grabbing and holding
objects may actually be easier than it appears, relying heavily
upon dense tactile sensing and passive compliance.  Loosely coupled
control of a compliant system seems to be the way that we handle many
things.  So I think competent manipulation is a longer term prospect
(maybe 10-20 years), but simpler forms of manipulation, such as
situations where the coffee maker is specially adapted for robotic
handling, will be available much sooner.
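
The shared structure between room-scale navigation and small-scale object 
perception that this post points to can be illustrated with a toy example: 
recovering a pose from range measurements to known landmarks. This is a 
hypothetical, minimal sketch (plain gradient-descent trilateration, not a 
full SLAM filter); the landmark coordinates and ranges below are invented 
for illustration.

```python
import math

def estimate_position(landmarks, ranges, guess=(0.0, 0.0), iters=300, lr=0.3):
    """Gradient-descent trilateration: find the (x, y) whose distances
    to the known landmarks best match the measured ranges."""
    x, y = guess
    for _ in range(iters):
        gx = gy = 0.0
        for (lx, ly), r in zip(landmarks, ranges):
            d = math.hypot(x - lx, y - ly)
            if d == 0.0:
                continue  # avoid division by zero at a landmark
            err = d - r  # residual between predicted and measured range
            gx += err * (x - lx) / d
            gy += err * (y - ly) / d
        x -= lr * gx
        y -= lr * gy
    return x, y

# Three beacons at room scale, or three object features at manipulation
# scale -- the estimation step is the same, only the units change.
landmarks = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = (2.0, 1.0)
ranges = [math.hypot(true_pos[0] - lx, true_pos[1] - ly)
          for lx, ly in landmarks]
x, y = estimate_position(landmarks, ranges)
print(f"estimated position: ({x:.2f}, {y:.2f})")
```

A real system would use a filter or batch optimizer over noisy measurements, 
but the core geometric update is the same at both scales, which is the 
crossover the post describes.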

As a side note, once you have a domestic robot capable of making
coffee in a similar manner to the way that humans do it, then a large
amount of human labour will become obsolete fairly quickly since such
a machine could be applied to many other tasks currently done by
people.

I don't give Wozniak's robot prediction much credence.  The video just
seems like random, not especially informed, stream-of-consciousness
stuff, and as far as I'm aware he doesn't have much knowledge of what's
going on in the robotics or automation industries.



Re: [agi] Wozniak's defn of intelligence

2008-02-08 Thread Matt Mahoney
--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 [ http://www.chron.com/disp/story.mpl/headline/biz/5524028.html ]
...
 Any system builders here care to give a guess as to how long it will be
 before 
 a robot, with your system as its controller, can walk into the average 
 suburban home, find the kitchen, make coffee, and serve it?

Nope, that's the wrong definition of AI.  AI doesn't mean human form or human
capabilities.  Fifty years ago we had maids, typists, and gas station
attendants.  The technical solution was not to build robotic versions, but to
eliminate the need for them.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Wozniak's defn of intelligence

2008-02-08 Thread J Storrs Hall, PhD
On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote:
 J Storrs Hall, PhD wrote:
  Any system builders here care to give a guess as to how long it will be 
before 
  a robot, with your system as its controller, can walk into the average 
  suburban home, find the kitchen, make coffee, and serve it?
 
 Eight years.
 
 My system, however, will go one better:  it will be able to make a pot 
 of the finest Broken Orange Pekoe and serve it.

In the average suburban home? No fair having the robot bring its own teabags 
(or would it be loose tea and strainer?), or having a coffee machine built 
in, for that matter. It has to live off the land...



Re: [agi] Wozniak's defn of intelligence

2008-02-08 Thread Richard Loosemore

J Storrs Hall, PhD wrote:

[ http://www.chron.com/disp/story.mpl/headline/biz/5524028.html ]

Steve Wozniak has given up on artificial intelligence.
"What is intelligence?" Apple's co-founder asked an audience of about 550 
Thursday at the Houston area's first Up Experience conference in Stafford.

His answer? A robot that could get him a cup of coffee.
"You can come into my house and make a cup of coffee and I can go into your 
house and make a cup of coffee," he said. "Imagine what it would take for a 
robot to do that."
It would have to negotiate the home, identify the coffee machine and know how 
it works, he noted.
But that is not something a machine is capable of learning — at least not in 
his lifetime, added Wozniak, who rolled onto the stage on his ever-present 
Segway before delivering a rapid-fire speech on robotics, his vision of 
robots in classrooms and the long haul ahead for artificial intelligence.


...

Any system builders here care to give a guess as to how long it will be before 
a robot, with your system as its controller, can walk into the average 
suburban home, find the kitchen, make coffee, and serve it?


Eight years.

My system, however, will go one better:  it will be able to make a pot 
of the finest Broken Orange Pekoe and serve it.




Richard Loosemore



