RE: [agi] SOTA

2007-01-14 Thread Gary Miller
No, and it's a damn good thing it isn't. If it were, we would be sentencing
it to a mindless job with no time off, only to be disposed of when a better
model comes out.

We only want our AIs to be as smart as necessary to accomplish their jobs,
just as our cells and organs are.

Limited consciousness or self-reflectivity may only be necessary in highly
complex systems like computers, where we may want them to recognize that
they have a virus and take steps, like searching for a digital vaccine, to
eliminate it without the owner even knowing it was there.

Even in these cases we are only giving the system consciousness over one
specific aspect of its being.

I would say that until we have software that can learn new free-format
information as we do, and modify its goal stack based upon that new
information, we do not have a truly conscious computer.

  _  

From: Bob Mottram [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 12, 2007 9:45 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SOTA



Ah, but is a thermostat conscious?

:-)






On 12/01/07, [EMAIL PROTECTED] wrote: 

http://www.thermostatshop.com/
 
Not sure what you've been Googling on but here they are.
 
There's even one you can call on the telephone


 If there's a market for this, then why can't I even buy a thermostat 
 with a timer on it to turn the temperature down at night and up in the 
 morning? The most basic home automation, which could have been built 
 cheaply 30 years ago, is still, if available at all, so rare that I've 
 never seen it. 
 


  _  

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to: 
http://v2.listbox.com/member/?list_id=303 





Re: [agi] SOTA

2007-01-14 Thread Bob Mottram

Well, there's no reason to stop at a purely utilitarian level.  A high level
of consciousness may be necessary for performing certain kinds of tasks, such
as imagining someone's reaction to a particular event.



On 14/01/07, Gary Miller [EMAIL PROTECTED] wrote:


 No, and it's a damn good thing it isn't. If it were, we would be sentencing
it to a mindless job with no time off, only to be disposed of when a better
model comes out.

We only want our AIs to be as smart as necessary to accomplish their jobs,
just as our cells and organs are.

Limited consciousness or self-reflectivity may only be necessary in highly
complex systems like computers, where we may want them to recognize that
they have a virus and take steps, like searching for a digital vaccine, to
eliminate it without the owner even knowing it was there.

Even in these cases we are only giving the system consciousness over one
specific aspect of its being.

I would say that until we have software that can learn new free-format
information as we do, and modify its goal stack based upon that new
information, we do not have a truly conscious computer.






Re: [agi] SOTA

2007-01-12 Thread Bob Mottram

Ah, but is a thermostat conscious?

:-)





On 12/01/07, [EMAIL PROTECTED] wrote:


http://www.thermostatshop.com/

Not sure what you've been Googling on but here they are.

There's even one you can call on the telephone

 If there's a market for this, then why can't I even buy a thermostat
 with a timer on it to turn the temperature down at night and up in the
 morning? The most basic home automation, which could have been built
 cheaply 30 years ago, is still, if available at all, so rare that I've
 never seen it.







Re: [agi] SOTA

2007-01-12 Thread Philip Goetz

On 1/12/07, [EMAIL PROTECTED] wrote:


http://www.thermostatshop.com/

Not sure what you've been Googling on but here they are.


Haven't been googling.  But the fact is that I've never actually
/seen/ one in the wild.  My point is that the market demand for such
simple and useful and cheap items is low enough that I've never
actually seen one.



Re: [agi] SOTA

2007-01-12 Thread Eliezer S. Yudkowsky

Philip Goetz wrote:


Haven't been googling.  But the fact is that I've never actually
/seen/ one in the wild.  My point is that the market demand for such
simple and useful and cheap items is low enough that I've never
actually seen one.


Check any hardware store; there's a whole shelf.  I bought one for my 
last apartment.  I see them all over the place.  They're really not rare.


Moral: in AI, the state of the art is often advanced far beyond what 
people think it is.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] SOTA

2007-01-12 Thread Matt Mahoney

--- Bob Mottram [EMAIL PROTECTED] wrote:

 Ah, but is a thermostat conscious?
 
 :-)

Are humans conscious?  It depends on your definition of consciousness, which
is really hard to define. 

Does a thermostat want to keep the room at a constant temperature?  Or does it
just behave as if that is what it wants?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] SOTA

2007-01-12 Thread justin corwin

On 1/12/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:

Philip Goetz wrote:

 Haven't been googling.  But the fact is that I've never actually
 /seen/ one in the wild.  My point is that the market demand for such
 simple and useful and cheap items is low enough that I've never
 actually seen one.


The term for this type of thermostat is a 'set-back' thermostat. They were
originally designed to save energy and heating/cooling bills by having
programmable periods, and they have become increasingly complex. My most
recent houses have all had what are essentially little calendar computers
in them.
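
As a hedged illustration (the hours and temperatures below are invented, not
taken from any real product), the "little calendar computer" behaviour boils
down to a time-indexed setpoint table:

```python
# Toy set-back schedule: (hour_of_day, target_F) pairs, in ascending order.
# The active setpoint is the latest entry at or before the current hour;
# before the first entry, last night's setting carries over past midnight.
SCHEDULE = [(6, 68), (9, 62), (17, 70), (22, 60)]

def setpoint(hour):
    """Return the target temperature (F) for a given hour of day (0-23)."""
    active = SCHEDULE[-1][1]  # wrap: early-morning hours keep the night setting
    for h, temp in SCHEDULE:
        if hour >= h:
            active = temp
    return active

print(setpoint(7))   # morning warm-up setpoint
print(setpoint(23))  # night set-back setpoint
```

A real set-back thermostat adds per-day schedules and hysteresis around the
setpoint, but the core logic is this small.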

They are extremely common in new construction, as this link shows:

http://www.marketresearch.com/product/display.asp?productid=1354170g=1

STUDY HIGHLIGHTS
- Honeywell/Magicstat was the top brand of thermostat bought in 2003.
- Two-thirds of the thermostats purchased in 2003 were set-back models.
- The average price paid for the electronic set-back thermostats was $70.
- Thermostats were purchased mostly from builders/contractors and home
centers. 


Check any hardware store; there's a whole shelf.  I bought one for my
last apartment.  I see them all over the place.  They're really not rare.

Moral: in AI, the state of the art is often advanced far beyond what
people think it is.


There are really two things being talked about here. One is SOTA, which,
almost by definition, is beyond what people think it is; the other is market
availability, or practical availability, which is very different from SOTA
technology.

SOTA AI technology is essentially that which you, knowing the latest
theories, build yourself. There is no such thing as a SOTA AI system
available on the market, the way there are SOTA stereo systems or SOTA
crypto systems available, because the market availability of the technology
does not have the same characteristics.


--
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com



Re: [agi] SOTA

2007-01-11 Thread Philip Goetz

On 06/01/07, Gary Miller [EMAIL PROTECTED] wrote:
 I like the idea of the house being the central AI though and communicating
 to house robots through an encrypted wireless protocol to prevent
 inadvertent commands from other systems and hacking.


This is the way it's going to go in my opinion.  In a house or office the
robots would really be dumb actuators - puppets - being controlled from a
central AI which integrates multiple systems together.  That way you can
keep the cost and maintenance requirements of the robot to a bare minimum.
Such a system also future-proofs the robot in a rapidly changing software
world, and allows intelligence to be provided as an internet based service.


If there's a market for this, then why can't I even buy a thermostat
with a timer on it to turn the temperature down at night and up in the
morning?  The most basic home automation, which could have been built
cheaply 30 years ago, is still, if available at all, so rare that I've
never seen it.



Re: [agi] SOTA

2007-01-11 Thread Mark Waser

If there's a market for this, then why can't I even buy a thermostat
with a timer on it to turn the temperature down at night and up in the
morning?  The most basic home automation, which could have been built
cheaply 30 years ago, is still, if available at all, so rare that I've
never seen it.


Huh?  I've never lived in a home without one (nor been aware that they were 
rare).


- Original Message - 
From: Philip Goetz [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, January 11, 2007 1:29 PM
Subject: Re: [agi] SOTA



On 06/01/07, Gary Miller [EMAIL PROTECTED] wrote:
 I like the idea of the house being the central AI though and communicating
 to house robots through an encrypted wireless protocol to prevent
 inadvertent commands from other systems and hacking.


This is the way it's going to go in my opinion.  In a house or office the
robots would really be dumb actuators - puppets - being controlled from a
central AI which integrates multiple systems together.  That way you can
keep the cost and maintenance requirements of the robot to a bare minimum.
Such a system also future-proofs the robot in a rapidly changing software
world, and allows intelligence to be provided as an internet based service.


If there's a market for this, then why can't I even buy a thermostat
with a timer on it to turn the temperature down at night and up in the
morning?  The most basic home automation, which could have been built
cheaply 30 years ago, is still, if available at all, so rare that I've
never seen it.







Re: [agi] SOTA

2007-01-11 Thread Randall Randall

On Jan 11, 2007, at 1:29 PM, Philip Goetz wrote:

On 06/01/07, Gary Miller [EMAIL PROTECTED] wrote:
This is the way it's going to go in my opinion.  In a house or office the
robots would really be dumb actuators - puppets - being controlled from a
central AI which integrates multiple systems together.  That way you can
keep the cost and maintenance requirements of the robot to a bare minimum.
Such a system also future-proofs the robot in a rapidly changing software
world, and allows intelligence to be provided as an internet based service.


If there's a market for this, then why can't I even buy a thermostat
with a timer on it to turn the temperature down at night and up in the
morning?  The most basic home automation, which could have been built
cheaply 30 years ago, is still, if available at all, so rare that I've
never seen it.


http://www.google.com/search?q=programmable+thermostat

They're extremely common; there's an entire aisle of such things at my
local Home Depot.

--
Randall Randall [EMAIL PROTECTED]
If you are trying to produce a commercial product in
 a timely and cost efficient way, it is not good to have
 somebody's PhD research on your critical path. -- Chip Morningstar




Re: [agi] SOTA

2007-01-11 Thread J. Andrew Rogers


On Jan 11, 2007, at 10:29 AM, Philip Goetz wrote:

If there's a market for this, then why can't I even buy a thermostat
with a timer on it to turn the temperature down at night and up in the
morning?  The most basic home automation, which could have been built
cheaply 30 years ago, is still, if available at all, so rare that I've
never seen it.



What country do you live in?  That thermostat technology has been
ubiquitous in the US for new construction for many years.  Granted,
older buildings tend to have their thermostats only opportunistically
replaced, but it has been many years, and a couple of places, since I
lived anywhere without this type of thermostat.


Apparently, the future has arrived and just is not evenly distributed.


J. Andrew Rogers



Re: [agi] SOTA

2007-01-06 Thread Bob Mottram

On 06/01/07, Philip Goetz [EMAIL PROTECTED] wrote:


I worked for a robotics company called Arctec in the early 1980s.
We built a robot called the Gemini.  They essentially solved the
navigation problem - in an office-space world.  You stuck a small
reflector on each side of every door, at intersections, and on its
recharging station, and if the world consisted of hallways with doors
leading off the hallways, it went where you told it to go, and went
back to its docking station and recharged when it had to.  This was
using a 1MHz 65C02.  The navigational code took 12K of RAM.  It relied
largely on having highly accurate narrow-beam sonar sensors all around
its body, and on the reflectors.




Reflectors have been used on AGVs for quite some time.  However, even using
reflectors the robot has no real idea of what its environment looks like.
Most of the time it's flying blind, guessing its way between reflectors,
like a moth navigating by the light of the moon.  In my opinion to make real
progress the machine needs to be able to see the three dimensional structure
of its environment at least as well as you or I can.  Only then can it make
more intelligent decisions about what to do.



The problem wasn't technological.  It was that nobody had any use for
a robot.  We never figured out what people would want the robot for.
I think that's still the problem.




This is something which I've also considered.  When you look at a dumb robot
like a Roomba, the technology to build it fairly economically has existed
for something like the last 25 years.  So why didn't these devices appear
much earlier?  I think that for a robot product to be successful, not only
does the technology need to be there, but the cultural attitude needs to be
there as well.

Even now, robot startup companies such as White Box Robotics seem to have
little idea of what their machines might actually be used for.  The
application of last resort is always security, but this is a very poor use
for a robot in my opinion.  Security is better and more economically done
with a disembodied intelligence using fixed cameras, a la 2001.

- Bob



Re: [agi] SOTA

2007-01-06 Thread Benjamin Goertzel


The problem wasn't technological.  It was that nobody had any use for
a robot.  We never figured out what people would want the robot for.
I think that's still the problem.



Phil, I think the real issue is that no one wants an expensive,
stupid, awkward robot...

A broadly functional household robot would be very useful, even if it
lacked intelligence beyond the human-moron level...

For instance, right now, I would like a robot to go into my daughter's
room and clean up the rabbit turds that are in the rabbit playpen in
there.  I would rather not do it.  But, a Roomba can't handle this
task because it can't climb over the walls of the playpen, nor
understand my instructions, nor pick up the turds but leave the legos
on the floor alone...

Heck, a robot to let the dogs in and out of the house would be nice
too... being doggie doorman gets tiring.  Of course, this could be
solved more easily by installing a doggie door ;-)

How about a robot to bring me the cordless phone when it rings, but
has been left somewhere else in the house ... ?  ;-)

How about one to put the dishes in the dishwasher and unload them ...
and re-insert the ones that didn't get totally cleaned?  The
dishwasher is a good invention, but it only does half the job.

The problem **is** technological: it's that current robots really suck
... not that non-sucky robots would be useless...



Re: [agi] SOTA

2007-01-06 Thread Pei Wang

Stanford scientists plan to make a robot capable of performing
everyday tasks, such as unloading the dishwasher.

http://news-service.stanford.edu/news/2006/november8/ng-110806.html

On 1/6/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:

 Phil, I think the real issue is that no one wants an expensive,
 stupid, awkward robot...





Re: [agi] SOTA

2007-01-06 Thread Benjamin Goertzel

Needless to say, I don't consider cleaning up the house a particularly
interesting goal for AGI projects.  I can well imagine it being done
by a narrow AI system with no capability to do anything besides
manipulate simple objects, navigate, etc.

Being able to understand natural language commands pertaining to
cleaning up the house is a whole other kettle of fish, of course.
This, as opposed to the actual house-cleaning, appears to be an
AGI-hard problem...

-- BenG

On 1/6/07, Pei Wang [EMAIL PROTECTED] wrote:

Stanford scientists plan to make a robot capable of performing
everyday tasks, such as unloading the dishwasher.

http://news-service.stanford.edu/news/2006/november8/ng-110806.html








Re: [agi] SOTA

2007-01-06 Thread Mike Dougherty

On 1/6/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


Needless to say, I don't consider cleaning up the house a particularly
interesting goal for AGI projects.  I can well imagine it being done
by a narrow AI system with no capability to do anything besides
manipulate simple objects, navigate, etc.

Being able to understand natural language commands pertaining to
cleaning up the house is a whole other kettle of fish, of course.
This, as opposed to the actual house-cleaning, appears to be an
AGI-hard problem...



But if the AGI were built, wouldn't it become the intelligence behind pretty
much the entire world of human-moron-level housecleaning robots?  All they'd
need is wifi to get instructions from the main brain.

But then a real AGI would likely become the main brain for just about every
process control program we use, so the term quickly changes from human-moron
level to human-level moron.  :)

I really want to see a central traffic computer take driving away from all
the unqualified (or disinterested) drivers on the roads.  I'd really like to
see companies get incentives to allow knowledge workers to work from home
offices to save commute time and fuel resources, but until that happens
(yeah, what employer wants to give up their sense of control?) it would be
nice to reclaim that time by allowing me to focus on what *I* want rather
than on driving.



Re: [agi] SOTA

2007-01-06 Thread Philip Goetz

On 1/6/07, Bob Mottram [EMAIL PROTECTED] wrote:


Reflectors have been used on AGVs for quite some time.  However, even using
reflectors the robot has no real idea of what its environment looks like.
Most of the time it's flying blind, guessing its way between reflectors,
like a moth navigating by the light of the moon.  In my opinion to make real
progress the machine needs to be able to see the three dimensional structure
of its environment at least as well as you or I can.  Only then can it make
more intelligent decisions about what to do.


The robot navigated successfully around a home or office, avoiding
obstacles, at walking speed.  What more do you need?

I think it performed so much better than the robots developed in
academia partly because we didn't have enough processing power to use
a camera, so we weren't sucked into that black hole (at the time) of
trying to process vision.


Even now robot startup companies such as White Box Robotics seem to have
little idea of what their machines might actually be used for.  The
application of last resort is always security, but this is a very poor use
for a robot in my opinion.  Security is better and more economically done
with a dissembodied intelligence using fixed cameras, a la 2001.


We had a customer who wanted us to attach a gun to the robot.
Probably not a programmer. :P



RE: [agi] SOTA

2007-01-06 Thread Gary Miller
Ben Said:
 Being able to understand natural language commands pertaining 
 to cleaning up the house is a whole other kettle of fish, of 
 course. This, as opposed to the actual house-cleaning, appears 
 to be an AGI-hard problem...

A full, Turing-test-level natural language system would not be necessary
for robotic control.

A pattern such as {clean|sweep|vacuum} (the )[RoomName] room ({for|in} [Number] minutes)

When coupled with a voice recognition system such as the ones Nuance is
marketing, this would increase the usefulness and interactivity of the
robot immensely.

[RoomName] and [Number] become variables passed to the robot vacuum and can
have defaults if omitted from the command.

The robot could come with a set of canned patterns for starters, and the
patterns could be customized by the user and associated with new behaviours.
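
A minimal sketch of how such a command pattern might be matched, assuming a
simple regex-based grammar (the pattern, room names, and default are
illustrative, not any shipped product's syntax):

```python
import re

# One canned pattern: {clean|sweep|vacuum} (the) [RoomName] room ({for|in} [Number] minutes)
# Optional groups take defaults when omitted from the command.
COMMAND = re.compile(
    r"^(?P<verb>clean|sweep|vacuum)\s+(?:the\s+)?(?P<room>\w+)\s+room"
    r"(?:\s+(?:for|in)\s+(?P<minutes>\d+)\s+minutes)?$",
    re.IGNORECASE,
)

def parse_command(text, default_minutes=30):
    """Return (verb, room, minutes), or None if the utterance doesn't match."""
    m = COMMAND.match(text.strip())
    if not m:
        return None
    minutes = int(m.group("minutes")) if m.group("minutes") else default_minutes
    return m.group("verb").lower(), m.group("room").lower(), minutes

print(parse_command("vacuum the living room for 15 minutes"))  # ('vacuum', 'living', 15)
print(parse_command("clean guest room"))                       # ('clean', 'guest', 30)
```

The tuple would then be handed to the robot as its task parameters; unmatched
utterances fall through to whatever clarification dialogue the system has.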

I like the idea of the house being the central AI though, communicating to
house robots through an encrypted wireless protocol to prevent inadvertent
commands from other systems and hacking.

If the robots were made to a standard such as Microsoft's robotics toolkit,
then a single control and monitoring system could coordinate multiple
robots' activities and prevent collisions, coordinate efforts, etc.  You'd
probably need a backup fault-tolerant system to prevent loss of critical
systems like security, fire reporting, and temperature control.

The house AI would be interfaced with the telephone and internet so that you
could enter remote commands if you think of something while you're away.
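
One way the authenticated command channel described above might look, as a
sketch only (the key, message fields, and 60-second freshness window are
invented for illustration, not any real robot protocol): each command is
signed with a shared secret so robots can reject forged or replayed messages.

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"house-ai-demo-key"  # hypothetical; in practice a per-robot provisioned secret

def sign_command(command, key=SHARED_KEY):
    """Wrap a command dict with a timestamp and an HMAC-SHA256 tag."""
    body = json.dumps({"cmd": command, "ts": int(time.time())}, sort_keys=True)
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_command(message, key=SHARED_KEY, max_age_s=60):
    """Return the command if the tag checks out and the message is fresh, else None."""
    expected = hmac.new(key, message["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        return None  # forged or corrupted in transit
    payload = json.loads(message["body"])
    if time.time() - payload["ts"] > max_age_s:
        return None  # stale: likely a replayed command
    return payload["cmd"]

msg = sign_command({"robot": "vacuum-1", "action": "clean", "room": "kitchen"})
print(verify_command(msg))
```

This only authenticates; actual confidentiality would need encryption on top
(e.g. a TLS-style channel), which is omitted here to keep the sketch short.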

-Original Message-
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Saturday, January 06, 2007 11:16 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SOTA

Needless to say, I don't consider cleaning up the house a particularly
interesting goal for AGI projects.  I can well imagine it being done by a
narrow AI system with no capability to do anything besides manipulate simple
objects, navigate, etc.

Being able to understand natural language commands pertaining to cleaning up
the house is a whole other kettle of fish, of course.
This, as opposed to the actual house-cleaning, appears to be an AGI-hard
problem...

-- BenG

On 1/6/07, Pei Wang [EMAIL PROTECTED] wrote:
 Stanford scientists plan to make a robot capable of performing 
 everyday tasks, such as unloading the dishwasher.

 http://news-service.stanford.edu/news/2006/november8/ng-110806.html



Re: [agi] SOTA

2007-01-06 Thread Bob Mottram

On 06/01/07, Mike Dougherty [EMAIL PROTECTED] wrote:


I really want to see a central traffic computer take driving away from all
the unqualified (or disinterested) drivers on the roads.  I'd really like to
see companies get incentives to allow knowledge workers to work from home
offices to save commute time and fuel resources, but until that happens
(yeah, what employer wants to give up their sense of control?) it would be
nice to reclaim that time by allowing me to focus on what *I* want rather
than on driving.




I agree on the driving thing.  One day people will laugh about the times
when you once had to get behind a steering wheel and wrestle with it, and
how human driving was so unreliable that people had to be insured against
the results of their own incompetence.  The most optimistic pundits believe
that it will be ten years from Stanley to the highway.  There could be
numerous non-technical hurdles (both legal and insurance related) which
might delay progress further.

I expect to see the first autonomous road system in some relatively quiet
area of the world, where there are few (or no) traffic regulations, and no
insurance companies to complain.  Maybe some country in Africa or a province
of China will be the first to go autonomous.



Re: [agi] SOTA

2007-01-06 Thread Bob Mottram

On 06/01/07, Gary Miller [EMAIL PROTECTED] wrote:


I like the idea of the house being the central AI though and communicating
to
house robots through a wireless encrypted protocol to prevent inadvertent
commands from other systems and hacking.




This is the way it's going to go in my opinion.  In a house or office the
robots would really be dumb actuators - puppets - being controlled from a
central AI which integrates multiple systems together.  That way you can
keep the cost and maintenance requirements of the robot to a bare minimum.
Such a system also future-proofs the robot in a rapidly changing software
world, and allows intelligence to be provided as an internet based service.
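The encrypted command channel Gary suggests for keeping hacked or stray commands away from the house robots could, at minimum, be an authenticated message scheme. A minimal sketch using a pre-shared key (all names and the key are hypothetical, not from any system discussed here):

```python
import hmac
import hashlib
import json
from typing import Optional

SHARED_KEY = b"house-robot-demo-key"  # hypothetical pre-shared key

def sign_command(command: dict) -> dict:
    """Wrap a command with an HMAC tag so puppet robots can reject
    messages that did not come from the central house AI."""
    payload = json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_command(message: dict) -> Optional[dict]:
    """Return the command if the tag checks out, else None."""
    payload = message["payload"].encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, message["tag"]):
        return json.loads(payload)
    return None

msg = sign_command({"robot": "vacuum-1", "action": "dock"})
assert verify_command(msg) == {"robot": "vacuum-1", "action": "dock"}
# a forged or tampered payload fails verification
tampered = dict(msg, payload='{"action": "open_door", "robot": "vacuum-1"}')
assert verify_command(tampered) is None
```

In a real deployment the payload would also be encrypted and carry a nonce to block replays; the sketch only shows the authentication half.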



Re: [agi] SOTA

2007-01-06 Thread BillK

On 1/6/07, Bob Mottram wrote:

This is the way it's going to go in my opinion.  In a house or office the
robots would really be dumb actuators - puppets - being controlled from a
central AI which integrates multiple systems together.  That way you can
keep the cost and maintenance requirements of the robot to a bare minimum.
Such a system also future-proofs the robot in a rapidly changing software
world, and allows intelligence to be provided as an internet based service.



http://www.pinktentacle.com/2006/12/top-10-robots-selected-for-robot-award-2006/

Robotic building cleaning system (Fuji Heavy Industries/ Sumitomo)

- This autonomous robot roams the hallways of buildings, performing
cleaning operations along the way. Capable of controlling elevators,
the robot can move from floor to floor unsupervised, and it returns to
its start location once it has finished cleaning. The robot is
currently employed as a janitor at 10 high-rise buildings in Japan,
including Harumi Triton Square and Roppongi Hills.

BillK



Re: [agi] SOTA

2007-01-05 Thread Philip Goetz

On 10/23/06, Bob Mottram [EMAIL PROTECTED] wrote:


Grinding my own axe, I also think that stereo vision systems will bring
significant improvements to robotics over the next few years.  Being able to
build videogame-like 3D models of the environment in real time is now a
feasible proposition which I think will happen before the decade is out.
With a good model of the environment the robot can rehearse possible
scenarios before actually running them, and find important features such as
desk or table surfaces.


I worked for a robotics company called Arctec in the early 1980s.
We built a robot called the Gemini.  They essentially solved the
navigation problem - in an office-space world.  You stuck a small
reflector on each side of every door, at intersections, and on its
recharging station, and if the world consisted of hallways with doors
leading off the hallways, it went where you told it to go, and went
back to its docking station and recharged when it had to.  This was
using a 1MHz 65C02.  The navigational code took 12K of RAM.  It relied
largely on having highly-accurate narrow-beam sonar sensors all around
its body, and on the reflectors.

The problem wasn't technological.  It was that nobody had any use for
a robot.  We never figured out what people would want the robot for.
I think that's still the problem.



Re: [agi] SOTA

2007-01-05 Thread Olie Lamb

On 1/6/07, Philip Goetz [EMAIL PROTECTED] wrote:



The problem wasn't technological.  It was that nobody had any use for
a robot.  We never figured out what people would want the robot for.
I think that's still the problem.




Well, I for one want a job assistant who can fetch things - the sort of
fetching that apprentices or surgical assistants are often asked to do.

Assistant: Please get me a Phillips head screwdriver and half-a-dozen 10mm
screws

A robot that could:

1) Recognise spoken instructions
2) Understand simple commands like "Get me X", "Hold this still", "Return
this..."
3) Manoeuvre from your work space to your tool-store
4) Grab items from an appropriately set-up tool-store
etc

Would be pretty damn useful, and I see most of this as being feasible with
current day tech.  Sure, such an assistant would be pretty damn expensive,
and less useful than a high-school-dropout apprentice/assistant (who can
also run down the street and get you a sandwich), but this is a real,
possible application for a robot.

-- Olie
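The simple command vocabulary Olie lists ("Get me X", "Hold this still", "Return this...") is narrow enough that a pattern-based parser covers it; a toy sketch of that mapping (the grammar is an illustrative assumption, not a real robot API):

```python
import re
from typing import Optional, Tuple

# Hypothetical command grammar for the fetch-assistant described above.
PATTERNS = [
    (re.compile(r"^(?:please\s+)?get me (?:a |an |some )?(.+)$", re.I), "fetch"),
    (re.compile(r"^hold (?:this|that) still$", re.I), "hold"),
    (re.compile(r"^return (?:this|that|the (.+))$", re.I), "return"),
]

def parse_command(utterance: str) -> Optional[Tuple[str, Optional[str]]]:
    """Map a recognised utterance to (action, object), or None if unknown."""
    text = utterance.strip().rstrip(".!")
    for pattern, action in PATTERNS:
        m = pattern.match(text)
        if m:
            obj = m.group(1) if m.groups() else None
            return (action, obj)
    return None

assert parse_command("Please get me a Phillips head screwdriver") == \
    ("fetch", "Phillips head screwdriver")
assert parse_command("Hold this still") == ("hold", None)
```

The hard parts Olie identifies - speech recognition, navigation, grasping - sit underneath this layer; the parser itself is the easy bit.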



Re: [agi] SOTA

2006-11-11 Thread Bob Mottram
Another recent development is the CMU telepresence robot, which is quite low cost and would be a good place to start. Since it uses a Linux based PC there should be plenty of scope for programming more sophisticated applications than Lego would be able to handle.

http://www.terk.ri.cmu.edu/recipes/index.php

Although it's intended to be used for education, I'm sure a ruggedised version of this could have industrial or home uses. All the software at present is open source.

For a more commercial system intelligence would be supplied to the robot via a web based subscription service, and could be purely human, purely AI, or a mixture of the two. Once you have people driving robots around via the internet you can bring your data mining systems to bear and start to automate some of that human intelligence.

Re: [agi] SOTA

2006-10-24 Thread Bob Mottram
On 23/10/06, Neil H. [EMAIL PROTECTED] wrote:

I'm also pretty surprised that they haven't done anything major with
their vSLAM tech:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1570091

Evolution really failed to capitalise upon their early success. One of their biggest mistakes was to make their API prohibitively expensive, so that few people have ever used it. There was a command line API supplied by default, but that was dreadful and had major limitations which users of the robots have long complained about.

The vSLAM technology is a monocular SLAM method which works to an extent, but when it fails it fails catastrophically. They did experiment with stereo SLAM last year (there is a paper somewhere online about that). Stereo gives much better accuracy and doesn't require the robot to travel for at least one metre before it can localise, but there are some fundamental issues with using things like SIFT features for doing stereo which they probably didn't realise.

I think their stuff was also licenced to Sony for use on their
AIBO, before Sony axed their robotics products.

Sony licensed the tech, but I think they only used it so that AIBO
could visually recognize pre-printed patterns on cards, which would
signal the AIBO to dance, return to the charging station, etc. SIFT is
IMHO overkill for that kind of thing, and it's a pity they didn't do
anything more interesting with it.

It's a shame they ditched AIBO and their other robots in development. AIBO users were rather unhappy about that. Perhaps some other company will buy the rights.

Perhaps. To play devil's advocate, how well do you think stereo vision
system would actually work for creating a 3D structure of a home
environment? It seems that distinctive features in the home tend to be
few and far between. Of course, the regions between distinctive
features tend to be planar surfaces, so perhaps it isn't too bad.

Well this is exactly what I'm (unofficially) working on now. From the results I have at the moment I can say with confidence that it will be possible to navigate a robot around a home environment using a pair of stereo cameras, with the robot remaining within at least a 7cm position tolerance. 7cm is just a raw localisation figure, and after Kalman filtering and sensor fusion with odometry the accuracy should be much better than that. You might think that there are not many features on walls, but even in environments which people consider to be blank there are often small imperfections or shading gradients which stereo algorithms can pick up. In real life few surfaces are perfectly uniform.

With good localisation performance high quality mapping becomes possible. I can run the stereo algorithms at various levels of detail, and use traditional occupancy grid methods (with a few tweaks) to build up evidence in a probabilistic fashion. The idea at the moment is to have the localisation algorithms running in real time using low-res grids, and to build a separate high quality model of the environment in a high resolution grid more gradually in a low priority background task. Once you have a good quality grid model it's then quite straightforward to detect things like walls and furniture, and to simplify the data down to something which is a more efficient representation similar to something you might find in a game or an AGI sim. You can also use the grid model in exactly the same way that 2D background subtraction systems work (except in 3D) in order to detect changes within the environment.
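Bob's mention of Kalman filtering raw stereo fixes against odometry can be illustrated with a 1-D predict/update cycle. This is an illustrative sketch of the general technique, not his implementation; real systems track full 2-D or 3-D pose, and all numbers below are assumptions:

```python
# 1-D Kalman fusion of an odometry prediction with a stereo position fix.

def kalman_step(x, p, u, q, z, r):
    """One predict/update cycle.
    x, p : previous position estimate and its variance
    u, q : odometry displacement and odometry noise variance
    z, r : stereo localisation fix and its variance (e.g. r = 0.07**2
           for the 7cm raw figure mentioned above)
    """
    # Predict: dead-reckon with odometry; uncertainty grows.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the stereo fix, weighted by relative confidence.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 0.0001
x, p = kalman_step(x, p, u=0.50, q=0.0004, z=0.47, r=0.07**2)
# The fused estimate lies between odometry (0.50) and the stereo fix
# (0.47), with variance smaller than the stereo fix alone.
assert 0.47 < x < 0.50
assert p < 0.07**2
```

This is why the fused accuracy "should be much better" than the 7cm raw localisation figure: each source corrects the other's weaknesses.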




Re: [agi] SOTA

2006-10-24 Thread BillK

On 10/20/06, Richard Loosemore wrote:

I would *love* to see those IBM folks put a couple of jabbering
four-year-old children in front of that translation system, to see how
it likes their 'low-intelligence' language. :-)  Does anyone have any
contacts on the team, so we could ask?



I sent an email to Liang Gu on the IBM MASTOR project team (not really
expecting a reply) :)  and have just received this response. Sounds
hopeful.

BillK
-

Bill,

Thanks for your interests on MASTOR. And your suggestion of MASTOR for
Children is really great! It is definitely much more meaningful if
MASTOR can not only help adults but also children communicate with
each other around the world using different languages!
Although recognizing Children's voice has been proved a very
challenging task, the translation and text-to-speech techniques thus
involved should be very similar to what we have now. We will seriously
investigate the possibility of this approach and will send you a test
link if we later developed a pilot system on the web.

Regards and thanks again for your enthusiasm about MASTOR,
Liang



Re: [agi] SOTA

2006-10-24 Thread Neil H.

On 10/24/06, Bob Mottram [EMAIL PROTECTED] wrote:


On 23/10/06, Neil H. [EMAIL PROTECTED] wrote:
  I think their stuff was also licenced to Sony for use on their
  AIBO, before Sony axed their robotics products.

 Sony licensed the tech, but I think they only used it so that AIBO
 could visually recognize pre-printed patterns on cards, which would
 signal the AIBO to dance, return to the charging station, etc. SIFT is
 IMHO overkill for that kind of thing, and it's a pity they didn't do
 anything more interesting with it.


It's a shame they ditched AIBO and their other robots in development.  AIBO
users were rather unhappy about that.  Perhaps some other company will buy
the rights.


Not just end-users, but there were also a number of research labs
which used AIBOs as a robotics platform (I was in such a lab as an
undergrad). They were pretty nice for running software on, and it was
more-or-less impossible to get a robot with similar capabilities at
that $1000 price range.

Somewhat surprisingly, the RoboCup four-legged league is still active,
with 24 teams qualifying to compete this year:
http://www.robocup2006.org/sixcms/detail.php?id=390&lang=en

I wonder how they keep their AIBOs operational over the years...


 Perhaps. To play devil's advocate, how well do you think stereo vision
 system would actually work for creating a 3D structure of a home
 environment? It seems that distinctive features in the home tend to be
 few and far between. Of course, the regions between distinctive
 features tend to be planar surfaces, so perhaps it isn't too bad.


Well this is exactly what I'm (unofficially) working on now.  From the
results I have at the moment I can say with confidence that it will be
possible to navigate a robot around a home environment using a pair of
stereo cameras, with the robot remaining within at least a 7cm position
tolerance.  7cm is just a raw localisation figure, and after Kalman
filtering and sensor fusion with odometry the accuracy should be much better
than that.  You might think that there are not many features on walls, but
even in environments which people consider to be blank there are often
small imperfections or shading gradients which stereo algorithms can pick
up.  In real life few surfaces are perfectly uniform.

With good localisation performance high quality mapping becomes possible.  I
can run the stereo algorithms at various levels of detail, and use
traditional occupancy grid methods (with a few tweaks) to build up evidence
in a probabilistic fashion.  The idea at the moment is to have the
localisation algorithms running in real time using low-res grids, and to
build a separate high quality model of the environment in a high resolution
grid more gradually in a low priority background task.  Once you have a good
quality grid model it's then quite straightforward to detect things like
walls and furniture, and to simplify the data down to something which is a
more efficient representation similar to something you might find in a game
or an AGI sim.  You can also use the grid model in exactly the same way that
2D background subtraction systems work (except in 3D) in order to detect
changes within the environment.


This is a pretty interesting approach. I'd love to see more details on
this in the future.

-- Neil
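The probabilistic occupancy-grid evidence accumulation Bob describes is commonly implemented as a log-odds update per cell; a minimal sketch of that standard technique (illustrative only, not his code, and the evidence weights are assumptions):

```python
import math

# Minimal log-odds occupancy grid: each observation adds or subtracts
# evidence, and repeated agreement drives the cell probability towards
# 0 or 1 while a single noisy reading barely moves it.

L_OCC = math.log(0.7 / 0.3)    # evidence when a cell looks occupied
L_FREE = math.log(0.3 / 0.7)   # evidence when a cell looks free

class OccupancyGrid:
    def __init__(self, width, height):
        # 0.0 log-odds = probability 0.5 = unknown
        self.logodds = [[0.0] * width for _ in range(height)]

    def update(self, x, y, occupied):
        self.logodds[y][x] += L_OCC if occupied else L_FREE

    def probability(self, x, y):
        return 1.0 / (1.0 + math.exp(-self.logodds[y][x]))

grid = OccupancyGrid(10, 10)
for _ in range(3):             # three sensor sweeps agree: occupied
    grid.update(4, 2, occupied=True)
assert grid.probability(4, 2) > 0.9
assert abs(grid.probability(0, 0) - 0.5) < 1e-9   # never observed
```

Running the same update at coarse resolution for real-time localisation and at fine resolution in a background task, as Bob describes, changes only the grid dimensions, not the update rule.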



Re: [agi] SOTA

2006-10-24 Thread Pei Wang

Bob and Neil,

Thanks for the informative discussion!

Several questions for you and others who are familiar with robotics:

For people whose interests are mainly in the connection between
sensorimotor and high-level cognition, what kind of API can be
expected in a representative robot? Something like Tekkotsu?

Any comments on Microsoft Robotics Studio?

Recently some people are talking about cognitive robotics, though I
haven't found any major new idea beyond what robotics has been
covering, except the suggestion that high-level cognition should be
taken into consideration. Am I missing something important?

If I want to start to try some low-budget programmable robot (say, in
the price range of Robosapien V2 and LEGO Mindstorms NXT), which one
would you recommend? I won't have high expectations of performance, but
will be interested in testing ideas on the coordination of perception,
reasoning, learning, and action.

Pei





Re: [agi] SOTA

2006-10-24 Thread Bob Mottram
On 24/10/06, Pei Wang [EMAIL PROTECTED] wrote:
Any comments on Microsoft Robotics Studio?

The Microsoft Robotics Studio is quite an unimpressive release. I had expected to see user friendly IDEs and drag-and-drop function block programming, but there's none of that. About the best I can say is that it's at a very early stage and isn't something which is easy to get started with.

The only redeeming feature of Microsoft Robotics Studio is the physics simulation. This might become of interest further down the line, but such simulations are useless unless the robot has a good perception system and is able to construct realistic models of its environment so that it may test out the consequences of possible actions before actually performing them (rather like Ben's conception of free will).

Recently some people are talking about cognitive robotics, though I
haven't found any major new idea beyond what robotics has been
covering, except the suggestion that high-level cognition should be
taken into consideration. Am I missing something important?

Well in most robotics applications there is a huge gap between The World and the high level cognitive concepts formed about it. That gap is called perception, and at the moment it's only very sparsely filled. We have various low resolution sensors, such as ultrasonics and laser scanners, but these usually give quite an impoverished view of what the world out there is like. One of my main aims over the next few years is to try to fill this perception gap, using vision systems to build high fidelity models. Once you have a good model you then stand a fighting chance of doing all kinds of reasoning and planning within the cognitive realm.

If I want to start to try some low-budget programmable robot (say, in
the price range of Robosapien V2 and LEGO Mindstorms NXT), which one
would you recommend?

The LEGO kit is really unbeatable for value. You can also remote control the robot using Bluetooth. It doesn't have a camera, but it's not inconceivable that you could attach a wireless webcam and try some image processing on a desktop computer. The types of reasoning which you'll be able to do with a LEGO robot will be very limited, and a long way from anything which might be regarded as high level cognition, but it would be a good place to start.

The main excitement will come when PC based robots are available at a reasonable (< $1000) cost, because PCs mean that some serious processing power can be brought to bear and detailed mapping of the environment becomes possible. Whitebox Robotics claim that their aim is to get the cost of their product into this kind of price range, but at the moment it remains merely an expensive research curiosity.




Re: Re: Re: [agi] SOTA

2006-10-24 Thread Ben Goertzel

Hi,

On 10/24/06, Pei Wang [EMAIL PROTECTED] wrote:

Hi Ben,

As you know, though I think AGISim is interesting, I'd rather directly
try the real thing. ;-)


I felt that way too once, and so (in 1996) I did directly try the real
thing.  Building a mobile robot and experimenting with it was fun, but
I quickly learned that one spends all one's time dealing with sensor
and actuator issues and never really gets to deal with cognition in
any interesting way.  Admittedly, robotic tech has advanced a lot
since then, but I think the basic point still holds.  IMO, given the
current state of mobile robot tech, to do robotics-based AI
effectively requires at least one dedicated team member fully devoted
to the robotics side.

Much robotics-based AI involves first experimenting in robot
simulation software, anyway.  AGISim right now is not a robot
simulation software package, but it could be tailored into one, which
would be interesting  Maybe we'll start by making an AGISim
Roomba, using the Pyro interface to Roomba ;-)

-- Ben



Re: Re: Re: Re: [agi] SOTA

2006-10-24 Thread Ben Goertzel

I used to be of the opinion that doing robotics in simulation was a waste of
time.  The simulations were too perfect.  Too simplistic compared to the
nitty gritty of real world environments.  Algorithms developed and optimised
for simulated environments would not translate well (or at all) into real
robotics applications operating in non trivial environments.  Ten years ago
that was true, but now I think it's possible to build simulations with
graphics and physics which are substantially more realistic, to the point
where it might be possible to take algorithms developed within simulation,
dump them onto a real robot and expect to see similar performance.  You'd
need to be careful to simulate sensor uncertainties, but it should be
possible.


Indeed ... I am interested in seeing AGISim go in this direction,
though there is still more basic work to be done on AGISim first...


Ironically, good quality simulations for robotics development will
themselves assist in the cognitive process, becoming the robot's inner
theatre of the mind within which it may experiment with possible scenarios
before committing to a course of action.


Yeah, this is something we have discussed in an AGISim/Novamente
context, though we have not done it yet.  Basically, giving a
Novamente system an internal AGISim theatre in which to experiment
with various actions and scenarios, using simulated physics to
shortcut the need for extensive cognition when appropriate...

-- Ben
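The point above about being "careful to simulate sensor uncertainties" often amounts to wrapping ideal simulated readings in calibrated noise and dropouts before they reach the algorithm under test. A minimal sketch; all parameter values are illustrative assumptions, not from any real sensor:

```python
import random

def noisy_range(true_range_m, sigma=0.02, dropout_prob=0.05, max_range=4.0):
    """Return a simulated sonar/laser reading for a true distance:
    Gaussian measurement noise plus occasional lost echoes that read
    as max range, clamped to the sensor's physical limits."""
    if random.random() < dropout_prob:
        return max_range            # lost echo reads as "nothing there"
    reading = random.gauss(true_range_m, sigma)
    return min(max(reading, 0.0), max_range)

random.seed(1)
readings = [noisy_range(1.5) for _ in range(1000)]
valid = [r for r in readings if r < 4.0]
mean = sum(valid) / len(valid)
assert abs(mean - 1.5) < 0.05   # unbiased apart from dropouts
```

An algorithm that only ever sees `true_range_m` directly is exactly the "too perfect" simulation the quoted passage warns against.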



Re: [agi] SOTA

2006-10-23 Thread Neil H.

On 10/23/06, Bob Mottram [EMAIL PROTECTED] wrote:


Another interesting development is the rise of the use of invariant feature
detection algorithms together with geometric hashing for some kinds of
object recognition.  The most notable successes to date have been using
David Lowe's SIFT method, which I think bears some resemblance to earlier
methods such as Moravec's interest operator.  To an extent these are just
old algorithms developed in the 1980s enjoying a new lease of life within a
more favourable computational environment.


Heh, I know what you mean. In the computer vision/recognition
literature, it almost seems like the non-deformable stuff can be split
into pre-SIFT and post-SIFT. After Lowe's SIFT paper came out in 1999,
it's kind of tricky to find a non-face visual recognition paper that
doesn't use SIFT or some derivative. More recently, representations
which take advantage of more contextual information (like Belongie's
Shape Context or Berg's geometric blur) seem rather interesting, but
I'm not sure how much they've proven themselves yet.

For those of you who haven't seen SIFT in action, there's a neat
downloadable demo available from Evolution Robotics:
http://www.evolution.com/core/ViPR/ (registration required)

It's been a few years since I've tried the demo myself, but if I
recall correctly, you just plug a webcam into your computer, and you
can use the program to learn to visually recognize different objects
you hold in front of it.

-- Neil



Re: [agi] SOTA

2006-10-23 Thread Neil H.

On 10/23/06, Bob Mottram [EMAIL PROTECTED] wrote:


It's a shame that Evolution Robotics weren't able to develop that system
further.  A logical progression would be to extend the geometric hashing to
3D and eventually 4D, although that would require a stereo camera or some
other way of measuring distances to the observed features.  Even so, that
demo program of theirs remains one of the more impressive examples of
invariant object recognition.  You can present objects at all kinds of
different rotations and scales and still have the program locate them, even
within a noisy webcam image.

I know one guy who bought an Evolution robot some years ago.  He took it to
a primary school and demonstrated how it recognised different objects.  He
said that the robot could recognise the objects and speak their names faster
than the children could.
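The invariance that geometric hashing exploits can be shown in a few lines: express each point of a model in the frame defined by an ordered basis pair, and the resulting coordinates survive any translation, rotation, and uniform scaling of the whole set. This is a toy sketch of the principle only; a full system would quantize these coordinates to index a hash table of (model, basis) votes, and the 3D extension Bob mentions would need a third basis point.

```python
def basis_coords(points, i, j):
    """Express every other point in the frame defined by the ordered basis
    pair (points[i], points[j]).  The coordinates are invariant to
    translation, rotation and uniform scaling of the point set."""
    ox, oy = points[i]
    ex, ey = points[j][0] - ox, points[j][1] - oy
    norm2 = ex * ex + ey * ey
    coords = []
    for k, (px, py) in enumerate(points):
        if k in (i, j):
            continue
        dx, dy = px - ox, py - oy
        # projection onto the basis vector and onto its perpendicular
        coords.append((round((dx * ex + dy * ey) / norm2, 6),
                       round((dx * -ey + dy * ex) / norm2, 6)))
    return coords
```

Rotating, scaling, and translating a model leaves these basis-frame coordinates unchanged, which is why the hashed entries still match.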


Is a stereo camera system really necessary if you can move the camera
around to get shape-from-motion?

Last I heard Evolution was partnering up with WowWee, to load their
software onto the next generation of their RoboSapien toy:
http://www.pcmag.com/article2/0,1895,1941233,00.asp

I find a SIFT-equipped robot in the $200 range to be quite exciting.
Hopefully the new generation will be as hack-friendly as the older
generations.

-- Neil



Re: [agi] SOTA

2006-10-23 Thread Bob Mottram
You can get depth information from single-camera motion (e.g. Andrew Davison's MonoSLAM), but this requires an initial size calibration and continuous tracking. If the tracking is lost at any time you need to recalibrate. This makes single-camera systems less practical. With a stereo camera the baseline gives the scale calibration.
My inside sources tell me that there's little or no software development going on at Evolution Robotics, and that longstanding issues and bugs remain unfixed. They did license their stuff to WowWee, and also Whitebox Robotics, so it's likely we'll see more SIFT-enabled robots in the not too distant future. I think their stuff was also licensed to Sony for use on their AIBO, before Sony axed their robotics products.
Grinding my own axe, I also think that stereo vision systems will bring significant improvements to robotics over the next few years. Being able to build videogame-like 3D models of the environment in real time is now a feasible proposition which I think will happen before the decade is out. With a good model of the environment the robot can rehearse possible scenarios before actually running them, and find important features such as desk or table surfaces.
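The scale calibration the baseline provides is just the standard stereo relation Z = f * B / d: with the focal length f in pixels and baseline B in metres, pixel disparity gives metric depth directly, which a single moving camera can only recover up to an unknown scale factor. A minimal sketch (the numbers in the test are illustrative, not from any particular camera):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature from a calibrated stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / disparity_px
```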
On 23/10/06, Neil H. [EMAIL PROTECTED] wrote:

Is a stereo camera system really necessary if you can move the camera
around to get shape-from-motion?

Last I heard Evolution was partnering up with WowWee, to load their
software onto the next generation of their RoboSapien toy:
http://www.pcmag.com/article2/0,1895,1941233,00.asp

I find a SIFT-equipped robot in the $200 range to be quite exciting.
Hopefully the new generation will be as hack-friendly as the older
generations.

-- Neil



Re: [agi] SOTA

2006-10-23 Thread Neil H.

On 10/23/06, Bob Mottram [EMAIL PROTECTED] wrote:

My inside sources tell me that there's little or no software development
going on at Evolution Robotics, and that longstanding issues and bugs remain
unfixed.  They did license their stuff to WowWee, and also Whitebox Robotics,
so it's likely we'll see more SIFT-enabled robots in the not too distant
future.


I hope so. I visited Evolution last year, and I remember that (what
seemed to be) their chief product was a system for automatically
identifying groceries hidden at the bottom of shopping carts. I guess
that's useful from a business perspective, but it isn't particularly
sexy.

I'm also pretty surprised that they haven't done anything major with
their vSLAM tech:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1570091


I think their stuff was also licensed to Sony for use on their
AIBO, before Sony axed their robotics products.


Sony licensed the tech, but I think they only used it so that AIBO
could visually recognize pre-printed patterns on cards, which would
signal the AIBO to dance, return to the charging station, etc. SIFT is
IMHO overkill for that kind of thing, and it's a pity they didn't do
anything more interesting with it.


Grinding my own axe, I also think that stereo vision systems will bring
significant improvements to robotics over the next few years.  Being able to
build videogame-like 3D models of the environment in real time is now a
feasible proposition which I think will happen before the decade is out.
With a good model of the environment the robot can rehearse possible
scenarios before actually running them, and find important features such as
desk or table surfaces.


Perhaps. To play devil's advocate, how well do you think a stereo vision
system would actually work for recovering the 3D structure of a home
environment? It seems that distinctive features in the home tend to be
few and far between. Of course, the regions between distinctive
features tend to be planar surfaces, so perhaps it isn't too bad.

-- Neil



Re: [agi] SOTA

2006-10-21 Thread Andrew Babian
On Fri, 20 Oct 2006 22:15:37 -0400, Richard Loosemore wrote
 Matt Mahoney wrote:
  From: Pei Wang [EMAIL PROTECTED]
  On 10/20/06, Matt Mahoney [EMAIL PROTECTED] wrote:
  
  It is not that we can't come up with the right algorithms.  
   It's that we don't have the
  computing power to implement them.
  
  Can you give us an example? I hope you don't mean algorithms like
  exhaustive search.
  
  For example, neural networks which perform rudimentary pattern 
  detection and control for vision, speech, language, robotics etc.  
  Most of the theory had been worked out by the 1980s, but 
  applications have been limited by CPU speed, memory, and training 
  data.  The basic building blocks were worked out much earlier.  
  There are only two types of learning in animals, classical 
  (association) and operant (reinforcement) conditioning.  
  Hebb's rule for classical conditioning, proposed in 1949, is 
  the basis for most neural network learning algorithms today.  
  Models of operant conditioning date back to W. Ross Ashby's 
  1960 Design for a Brain, where he used randomized weight 
  adjustments to stabilize a 4-neuron system built from vacuum 
  tubes and mechanical components.
  
  Neural algorithms are not intractable.  They run in polynomial time.  
  Neural networks can recognize arbitrarily complex patterns by adding 
  more layers and training them one at a time.  This parallels the 
  way people learn complex behavior.  We learn simple patterns first, 
  then build on them.
 
 I initially wrote a few sentences saying what was wrong with the 
 above, but I chopped it.  There is just no point.
 
 What you said above is just flat-out wrong from beginning to end.  I 
 have done research in that field, and taught postgraduate courses in 
 it, and what you are saying is completely divorced from reality.
 
 Richard Loosemore

I have taken maybe one and a half graduate classes on the subject (it seems
like every AI survey class has to touch upon neural nets again), and I have
not taught or done research in the area, but even I recognized that most of
that was wrong.  I at least hold out the possibility that neural nets can be
made useful with some greater theory about architectures and much greater
computing power.  I think it would be worthwhile for you to take the time to
list what you think the flaws were, if only to open the possibility of some
positive recommendations for research directions.  Even though you may be
completely disillusioned, maybe not everyone is.
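For concreteness, the Hebb rule mentioned above amounts to about one line of code: the weight between two units grows in proportion to the product of their activities. A minimal sketch (the learning rate and vector form are modern conveniences, not part of Hebb's 1949 formulation):

```python
def hebbian_update(weights, pre, post, lr=0.5):
    """One step of Hebb's rule: delta_w = lr * (presynaptic activity) *
    (postsynaptic activity) -- units that fire together wire together."""
    return [w + lr * x * post for w, x in zip(weights, pre)]
```

Repeated co-activation strengthens exactly the connections from active inputs, which is the associative core that most later learning rules refine.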



Re: [agi] SOTA

2006-10-21 Thread Pei Wang

Andrew,

I happen to have the list you asked for. Last year I taught a graduate
course on NN (http://www.cis.temple.edu/~pwang/525-NC/CIS525.htm), and
afterwards wrote a paper
(http://nars.wang.googlepages.com/wang.AGI-CNN.pdf) listing its
strengths and weaknesses with respect to AGI.

In the paper, I only analyzed NN in its classical form, because it
simply has too many variants to be covered in a single paper. However,
even taking all the variants I know into consideration, I still cannot
agree with Matt that NN has the right algorithms and only needs
more hardware to work.

Of course, I'm talking about AGI, rather than specific pattern
recognition tasks, which NN often does well indeed, and additional
computational power will surely further improve its performance there.

Pei

On 10/21/06, Andrew Babian [EMAIL PROTECTED] wrote:

On Fri, 20 Oct 2006 22:15:37 -0400, Richard Loosemore wrote
 Matt Mahoney wrote:
  From: Pei Wang [EMAIL PROTECTED]
  On 10/20/06, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  It is not that we can't come up with the right algorithms.
   It's that we don't have the
  computing power to implement them.
 
  Can you give us an example? I hope you don't mean algorithms like
  exhaustive search.
 
  For example, neural networks which perform rudimentary pattern
  detection and control for vision, speech, language, robotics etc.
  Most of the theory had been worked out by the 1980s, but
  applications have been limited by CPU speed, memory, and training
  data.  The basic building blocks were worked out much earlier.
  There are only two types of learning in animals, classical
  (association) and operant (reinforcement) conditioning.
  Hebb's rule for classical conditioning, proposed in 1949, is
  the basis for most neural network learning algorithms today.
  Models of operant conditioning date back to W. Ross Ashby's
  1960 Design for a Brain, where he used randomized weight
  adjustments to stabilize a 4-neuron system built from vacuum
  tubes and mechanical components.
  
  Neural algorithms are not intractable.  They run in polynomial time.
  Neural networks can recognize arbitrarily complex patterns by adding
  more layers and training them one at a time.  This parallels the
  way people learn complex behavior.  We learn simple patterns first,
  then build on them.

 I initially wrote a few sentences saying what was wrong with the
 above, but I chopped it.  There is just no point.

 What you said above is just flat-out wrong from beginning to end.  I
 have done research in that field, and taught postgraduate courses in
 it, and what you are saying is completely divorced from reality.

 Richard Loosemore

I have taken maybe one and a half graduate classes on the subject (it seems
like every AI survey class has to touch upon neural nets again), and I have
not taught or done research in the area, but even I recognized that most of
that was wrong.  I at least hold out the possibility that neural nets can be
made useful with some greater theory about architectures and much greater
computing power.  I think it would be worthwhile for you to take the time to
list what you think the flaws were, if only to open the possibility of some
positive recommendations for research directions.  Even though you may be
completely disillusioned, maybe not everyone is.






Re: [agi] SOTA

2006-10-21 Thread Matt Mahoney
With regard to the computational requirements of AI, there is a very clear 
relation showing that the quality of a language model improves with added time 
and memory, as shown in the following table:  
http://cs.fit.edu/~mmahoney/compression/text.html

And with the size of the training set, as shown in this graph: 
http://cs.fit.edu/~mmahoney/dissertation/

Before you argue that text compression has nothing to do with AI, please read 
http://cs.fit.edu/~mmahoney/compression/rationale.html

I recognize that language modeling is just one small aspect of AGI.  But 
compression gives us hard numbers to compare the work of over 80 researchers 
spanning decades.  The best performing systems push the hardware to its limits. 
 This, and the evolutionary arguments I gave earlier lead me to believe that 
AGI will require a lot of computing power.  Exactly how much, nobody knows.
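The numbers behind those benchmarks are essentially cross-entropy in bits per character. Here is a toy two-pass estimate of the quantity involved (optimistic, since it scores the text under a model trained on that same text; a real compressor must model online, in a single pass):

```python
import math
from collections import Counter, defaultdict

def bits_per_char(text, order=2):
    """Bits per character of `text` under its own order-n context model.
    Lower is better; fully predictable text costs ~0 bits per character."""
    counts = defaultdict(Counter)
    for i in range(order, len(text)):
        counts[text[i - order:i]][text[i]] += 1  # count char after each context
    total_bits, n = 0.0, 0
    for i in range(order, len(text)):
        ctx = counts[text[i - order:i]]
        total_bits += -math.log2(ctx[text[i]] / sum(ctx.values()))
        n += 1
    return total_bits / n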

Whether or not AGI can be accomplished most efficiently with neural networks is 
an open question.  But the one working system we know of is based on it, and we 
ought to study it.  One critical piece of missing knowledge is the density of 
synapses in the human brain.  I think this could be resolved by putting some 
brain tissue under an electron microscope, but I guess that the number is not 
important to neurobiologists.

I read Pei Wang's paper, http://nars.wang.googlepages.com/wang.AGI-CNN.pdf
Some of the shortcomings of neural networks mentioned only apply to classical 
(feedforward or symmetric) neural networks, not to asymmetric networks with 
recurrent circuits and time delay elements, as exist in the brain.  Such 
circuits allow for short term stable or oscillating states which overcome some 
shortcomings such as the inability to train on multiple goals, which could be 
accomplished by turning parts of the network on or off.  Also, it is not true 
that training has to be offline using multiple passes, as with backpropagation. 
 Human language is structured so that layers can be trained progressively 
without need to search over hidden units.  Word associations like sun-moon or 
to-from are linear.  Some of the top compressors mentioned above (paq8, 
WinRK) use online, single pass neural networks to combine models, alternating 
prediction and training.
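The "online, single pass" combination used by those compressors is logistic mixing: each model's probability is stretched into the logistic domain, combined with learned weights, and the weights receive one gradient step per bit. This is a simplified sketch of the idea, not paq8's actual code:

```python
import math

def mix_predict(probs, weights):
    """Combine model probabilities in the logistic (stretched) domain."""
    stretch = [math.log(p / (1 - p)) for p in probs]
    t = sum(w * s for w, s in zip(weights, stretch))
    return 1 / (1 + math.exp(-t))

def mix_update(probs, weights, bit, lr=0.02):
    """One online gradient step on the mixing weights after seeing the
    actual bit (0 or 1); models that predicted well gain weight."""
    p = mix_predict(probs, weights)
    stretch = [math.log(q / (1 - q)) for q in probs]
    return [w + lr * (bit - p) * s for w, s in zip(weights, stretch)]
```

Alternating predict and update like this trains in a single pass over the data, with no separate offline training phase.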

But it is interesting that most of the remaining shortcomings are also 
shortcomings of human thought, such as the inability to insert or represent 
structured knowledge accurately.  This is evidence that our models are correct. 
 This does not mean they are the best answer.  We don't want to duplicate the 
shortcomings of humans.  We do not want to slow down our responses and insert 
errors in order to pass the Turing test (as in Turing's 1950 example).


-- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] SOTA

2006-10-21 Thread Pei Wang

On 10/21/06, Matt Mahoney [EMAIL PROTECTED] wrote:


I read Pei Wang's paper, http://nars.wang.googlepages.com/wang.AGI-CNN.pdf
Some of the shortcomings of neural networks mentioned only apply to classical
(feedforward or symmetric) neural networks, not to asymmetric networks with
recurrent circuits and time delay elements, as exist in the brain.


I made it clear that the paper is only directly about classical NN, as
defined at the beginning of the paper. I guess that for each of the
shortcomings I mentioned in the paper, someone may design a NN just to
overcome it. However, I don't know of any NN that can overcome all of
the shortcomings, or even most of them, in a consistent manner.

I never mentioned the brain in the paper, because it is a completely
different issue. When we say "the brain is a neural network", we are
using the phrase "neural network" in a very different sense.
Therefore, the fact that the brain can do something doesn't mean that
what we call a "neural network" in AI research can do the same, even
after major extensions and revisions of the existing models.


But it is interesting that most of the remaining shortcomings are also 
shortcomings of
human thought, such as the inability to insert or represent structured knowledge
accurately.  This is evidence that our models are correct.


Every shortcoming of NN I mentioned in the paper was selected exactly
because the human mind does not have the problem at all, or it is much
less of a problem there. Furthermore, I mentioned it as a shortcoming of
NN because I believe it can be overcome by other AI techniques, though
I didn't discuss them in the paper. It makes no sense to criticize NN
for failing at a task that nobody can accomplish.

For example, the human mind and some other AI techniques handle
structured knowledge much better than NN does.

Pei



Re: [agi] SOTA

2006-10-21 Thread Matt Mahoney
- Original Message 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 21, 2006 5:25:13 PM
Subject: Re: [agi] SOTA

For example, the human mind and some other AI techniques handle
structured knowledge much better than NN does.

Is this because the brain is representing the knowledge differently than a 
classical neural network, or because the brain has a lot more memory and can 
afford to represent structured knowledge inefficiently?

I agree with the conclusion of your paper that a classical neural network is 
not sufficient to solve AGI.  The brain is much more complex than that.  But I 
think a neural architecture or a hybrid system that includes neural networks of 
some type is the right direction.  For example, Novamente (if I understand 
correctly, a weighted hypergraph) has some resemblance to a neural network.

 
-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] SOTA

2006-10-21 Thread Pei Wang

On 10/21/06, Matt Mahoney [EMAIL PROTECTED] wrote:

- Original Message 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 21, 2006 5:25:13 PM
Subject: Re: [agi] SOTA

For example, the human mind and some other AI techniques handle
structured knowledge much better than NN does.

Is this because the brain is representing the knowledge differently than a 
classical neural
network, or because the brain has a lot more memory and can afford to represent
structured knowledge inefficiently?


I believe it is mainly the former.


I agree with the conclusion of your paper that a classical neural network is 
not sufficient to solve AGI.  The brain is much more complex than that.  But I 
think a neural architecture or a hybrid system that includes neural networks of 
some type is the right direction.  For example, Novamente (if I understand 
correctly, a weighted hypergraph) has some resemblance to a neural network.



Well, in that sense NARS also has some resemblance to a neural
network, as well as many other AI systems.

To me, the problem is that the current NN technique is not rich and
powerful enough to support AGI design, though many ideas behind NN are
really necessary for AGI, as I argued in my paper --- I hope the paper
is not taken by people as pure criticism of NN, since I also listed
its advantages over traditional symbolic AI.

Even if an AGI has some similarity with a NN, it is not necessarily a
hybrid system with a NN part (though I cannot exclude that
possibility), but may take some NN ideas without most of the details,
as in the case of NARS.

Pei


-- Matt Mahoney, [EMAIL PROTECTED]









Re: [agi] SOTA

2006-10-20 Thread Pei Wang

Olie,

I agree with most of your statements; however, history shows that
almost every subfield of AI has enjoyed a period of rapid progress
accompanied by optimism, followed by a long period of slow movement
accompanied by pessimism. Just think about expert systems in the early
1980s and neural networks in the late 1980s, or, more recently, the
rise of the multi-agent system and data mining movements. Even NLP
looked promising to some people when statistical methods began to show
their power. Because of this, I'm not sure how long robotics can keep
its recent rate of improvement without major progress in AI in general.

I wonder if there is anyone in this list who has been actually working
in the field of robotics, and I would be very interested in learning
the causes of the recent development.

Pei

On 10/19/06, Olie Lamb [EMAIL PROTECTED] wrote:

(Excellent list there, Matt)

Although Pei Wang makes a good point that the fragmentation of AI does make
it difficult to compare projects, it is interesting+ to note the huge
differences in the movements in different narrow-AI fields.

As has already been mentioned, it is interesting+ to compare the way that
progress is very slow in areas such as NLP and Expert Systems, whereas there
is significant, albeit gradual progress in physical interaction systems.

For instance, the soccer-bots get better every year, and cars can now finish
DARPA Grand Challenge-like events in reasonable time...  (I personally
think that we're fast approaching a critical point where the technology is
just good enough to attract more cash and hence more improvement; although
meatbags will be better traffic-drivers for a while yet, physical
interaction systems can now perform well enough for many applications)

Although the question "What is State-of-the-Art?" won't attract an
incontrovertibly good answer, it prompts a lot of bloody good questions that
can be answered usefully.

-- Olie
 





Re: [agi] SOTA

2006-10-20 Thread Richard Loosemore

BillK wrote:

On 10/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:


Sorry, but IMO large databases, fast hardware, and cheap memory ain't
got nothing to do with it.

Anyone who doubts this get a copy of Pim Levelt's Speaking, read and
digest the whole thing, and then meditate on the fact that that book is
a mere scratch on the surface (IMO a scratch in the wrong direction,
too, but that's neither here nor there).

I saw a recent talk about an NLP system which left me stupefied that so
little progress has been made in the last 20 years.

Having a clue about just what a complex thing intelligence is, has
everything to do with it.




Most normal speaking requires relatively little 'intelligence'.

Adults who take young children on foreign holidays are amazed at how
quickly the children appear to be chattering away to other children in
a foreign language.
They manage it for several reasons:
1) they don't have the other interests and priorities that adults have.
2) they use simple sentence structures and smallish vocabularies.
3) they discuss simple subjects of interest to children.


Bill, with all due respect, this flies in the face of a massive body of 
work that reveals just how incredibly complex the understanding and 
production of language actually is.


As I said before, try taking a look at the opening pages of Levelt's 
book.  He takes a fragment of conversation consisting of one question 
and one answer, then he looks at exactly what he thinks was going on 
when the respondent said (IIRC):  I I don't know the way ... play WELL 
enough, sir.


The amount of planning, at numerous levels, that had to be engaged just 
to come out with that utterance is truly staggering.  You don't have to 
duplicate the faltering prose, but on the other hand this faltering 
prose reveals that enormously sophisticated processes were at work when 
the utterance was produced, and the machine should have at least the 
same level of sophistication.  Humans are built in such a way that they 
can understand (at least implicitly) many levels of message in even as 
simple a phrase as the one above.  To be able to understand humans, 
machines would need the same degree of subtle understanding.


For you to blithely say Most normal speaking requires relatively little 
'intelligence' is just mind-boggling.


Richard Loosemore








Re: [agi] SOTA

2006-10-20 Thread Richard Loosemore

Matt Mahoney wrote:
Sorry, but IMO large databases, fast hardware, and cheap memory ain't 
got nothing to do with it.


Yes it does.  The human brain has vastly more computing power, 
memory and knowledge than all the computers we have been doing AI 
experiments on.  We have been searching for decades for shortcuts 
to fit our machines and found none.  If any existed, then why hasn't 
human intelligence evolved in insect-sized brains?



If the brain used its hardware in such a way that (say) a million 
neurons were  required to implement a function that, on a computer, 
required a few hundred gates, your comparisons would be meaningless.  We 
do not know one way or the other what the equivalence ratio is, but your 
statement implicitly assumes a particular (and unfavorable) ratio:  you 
simply cannot make that statement without good reasons for assuming a 
ratio.  There are no such reasons, so the statement is meaningless.


We have been searching for decades to find shortcuts to fit our 
machines?  When you send a child into her bedroom to search for a 
missing library book, she will often come back 15 seconds after entering 
the room with the statement I searched for it and it's not there. 
Drawing definite conclusions is about equally reliable, in both these cases.


It used to be a standing joke in AI that researchers would claim there 
was nothing wrong with their basic approach, they just needed more 
computing power to make it work.  That was two decades ago:  has this 
lesson been forgotten already?




As long as knowledge accumulates exponentially (as it has been 
doing for centuries) and Moore's law holds (which it has since the
1950s), you can be sure that machines will catch up with human brains.  
When that happens, a lot of AI problems that have been stagnant for a

long time will be solved all at once.


A hundred dog's dinners maketh not a feast.


Having a clue about just what a complex thing intelligence is, has 
everything to do with it.


Which is why we will not be able to control AI after we produce it. 
It is not possible, even in theory, for a machine (your brain) 
to simulate or predict another machine with more states or greater 
Kolmogorov complexity [1].  The best you can do is duplicate the 
architecture and learning mechanisms of the brain and feed it data 
you can't examine because there is too much of it.  You will have 
built AI but you won't know how it works.


[1] Legg, Shane, (2006), Is There an Elegant Universal Theory of
Prediction?,  Technical Report
IDSIA-12-06, IDSIA / USI-SUPSI, Dalle Molle
Institute for Artificial Intelligence, Galleria 2, 6928 Manno, Switzerland.


A completely spurious argument.  You would not necessarily *need* to 
simulate or predict the AI, because the kind of simulation and 
prediction you are talking about is low-level, exact state prediction 
(this is inherent in the nature of proofs about Kolmogorov complexity).


It is entirely possible to build an AI in such a way that the general 
course of its behavior is as reliable as the behavior of an Ideal Gas: 
can't predict the position and momentum of all its particles, but you 
sure can predict such overall characteristics as temperature, pressure 
and volume.
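The Ideal Gas point is easy to demonstrate numerically: individual trajectories are unpredictable, yet the ensemble average is extremely stable. A toy run in normalized units (nothing physical about the constants; `mean_energy` is a name invented for this sketch):

```python
import random

random.seed(0)  # deterministic seed, purely for the illustration

def mean_energy(n=10000):
    """Mean 'kinetic energy' of n particles with Gaussian random velocities.
    No single particle is predictable, but the ensemble mean barely moves."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(n)) / n

# Five independent "measurements" of the aggregate quantity
runs = [mean_energy() for _ in range(5)]
```

Each run draws 10,000 fresh random velocities, yet the mean energy stays within a few percent of 1, which is the sense in which aggregate behavior can be reliable while microstates are not.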


Richard Loosemore














Re: [agi] SOTA

2006-10-20 Thread Mark Waser
It is entirely possible to build an AI in such a way that the general 
course of its behavior is as reliable as the behavior of an Ideal Gas: 
can't predict the position and momentum of all its particles, but you sure 
can predict such overall characteristics as temperature, pressure and 
volume.


Thank you!  FINALLY, someone came up with a clear, succinct, and accurate 
rebuttal . . . .





Re: [agi] SOTA

2006-10-20 Thread Richard Loosemore

BillK wrote:

On 10/20/06, Richard Loosemore [EMAIL PROTECTED] wrote:


For you to blithely say Most normal speaking requires relatively little
'intelligence' is just mind-boggling.



I am not trying to say that language skills don't require a human
level of intelligence. That's obvious. That is what makes humans human.

But day-to-day chat can be mastered by children, even in a foreign 
language.


Watch that video I referenced in my previous post, of an American
chatting to a Chinese woman via a laptop running MASTOR software.
http://www.research.ibm.com/jam/speech_to_speech.mpg

Now tell me that that laptop is showing great intelligence to
translate at the basic level of normal conversation. Simple
subject-object-predicate stuff. Basic everyday vocabulary.
No complex similes, metaphors, etc.

There is a big difference between discussing philosophy and saying
Where is the toilet?
That is what I was trying to point out.


I take your point, but the problem I have with what you are saying is 
that you are making a claim about the feasibility of a restricted type 
of performance (not general language understanding, but just translation 
of very simple sentences), while at the same time trying to push the 
envelope on how useful and widely applicable that restricted performance 
actually is.


So when you say "Most normal speaking requires relatively little 
'intelligence'", I simply cannot buy the implications of that "most". 
Most language contains subtleties that do require a good deal of 
intelligence.  People consistently underestimate the sophistication of 
child language, in particular.


I would *love* to see those IBM folks put a couple of jabbering 
four-year-old children in front of that translation system, to see how 
it likes their 'low-intelligence' language. :-)  Does anyone have any 
contacts on the team, so we could ask?


Richard Loosemore







Re: [agi] SOTA

2006-10-20 Thread Matt Mahoney
- Original Message 
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, October 20, 2006 10:14:09 AM
Subject: Re: [agi] SOTA

We have been searching for decades to find shortcuts to fit our 
machines?  When you send a child into her bedroom to search for a 
missing library book, she will often come back 15 seconds after entering 
the room with the statement "I searched for it and it's not there." 
Drawing definite conclusions is about equally reliable, in both these cases.

If you have figured out how to implement AI on a PC, please share it
with us.  Until then, you will need a more convincing argument that we
aren't limited by hardware.


A lot of people smarter than you or me have been working on this problem for a 
lot longer than 15 seconds.  James first proposed association models of thought 
in 1890, about 90 years before connectionist neural models were popular.  Hebb 
proposed a model of classical conditioning, in which memory is stored in the 
synapse, in 1949, decades before the phenomenon was actually observed in living 
organisms.  By the early 1960s we had programs that could answer natural 
language queries (the 1959 BASEBALL program), translate Russian to English, 
prove theorems in geometry, solve arithmetic word problems, and recognize 
handwritten digits.

It is not that we can't come up with the right algorithms.  It's that we don't 
have the computing power to implement them.  The most successful AI 
applications today like Google require vast computing power.

If the brain used its hardware in such a way that (say) a million 
neurons were  required to implement a function that, on a computer, 
required a few hundred gates, your comparisons would be meaningless.

I doubt the brain is that inefficient.  There are lower animals that crawl with 
just a couple hundred neurons.  In higher animals, neural processing is 
expensive, so there is evolutionary pressure to compute efficiently.  Most of 
the energy you burn at rest is used by your brain.  Humans had to evolve larger 
bodies than other primates to support our larger brains.  In most neural 
models, it takes only one neuron to implement a logic gate and only one synapse 
to store a bit of memory.
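Matt's last claim is easy to make concrete: in a McCulloch-Pitts-style model, one threshold neuron suffices for a logic gate. A minimal sketch follows; the particular weights and thresholds are just one workable choice, not a claim about biology:

```python
# A McCulloch-Pitts-style threshold neuron: fires (1) when the weighted
# sum of its inputs reaches the threshold, otherwise stays silent (0).
def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# One neuron per gate: only the weights and threshold differ.
def AND(a, b):
    return neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return neuron([a, b], [1, 1], threshold=1)

def NOT(a):
    return neuron([a], [-1], threshold=0)

# Truth tables over all input combinations.
print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([OR(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
print([NOT(a) for a in (0, 1)])                     # [1, 0]
```

Since NAND alone is universal, "one neuron per gate" really does mean a neuron count comparable to a gate count for any Boolean circuit.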

It used to be a standing joke in AI that researchers would claim there 
was nothing wrong with their basic approach, they just needed more 
computing power to make it work.  That was two decades ago:  has this 
lesson been forgotten already?

I don't see why this should not still be true.  The problem is we still do not 
know just how much computing power is needed.  There is still no good estimate 
of the number of synapses in the human brain.  We only know it is probably 
between 10^12 and 10^15, and we aren't even sure of that.  So when AI is solved, 
it will probably be a surprise.
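A quick back-of-envelope calculation shows why that uncertainty matters for hardware. Assuming roughly one bit of memory per synapse (an assumption carried over from the neural models Matt cites, not a neuroscience result), the two ends of the range differ by three orders of magnitude of raw storage:

```python
# Rough storage arithmetic for the 10^12..10^15 synapse estimates,
# assuming (per the neural models cited above) ~1 bit per synapse.
BITS_PER_SYNAPSE = 1

for synapses in (10**12, 10**15):
    terabytes = synapses * BITS_PER_SYNAPSE / 8 / 1e12
    print(f"{synapses:.0e} synapses ~ {terabytes:g} TB")
```

At the low end (~0.125 TB) the memory alone was already within reach of a single 2006-era machine; at the high end (~125 TB) it plainly was not, which is one reason the range makes "when will hardware suffice?" hard to answer.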


-- Matt Mahoney, [EMAIL PROTECTED]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] SOTA

2006-10-20 Thread Josh Treadwell




Bill, Richard, etc,
Children don't have a great grasp of language, but they have all the
sensory and contextual mechanisms to learn a language by causal
interaction with their environment. Semantics are a learned system,
just as words are. In current AI we're programming semantic rules into
a huge neural database and asking it to play a big matching game.
These two types of learning may give the same result, but it's not the
same process by a long shot. Every time we logically code an
algorithm, we're only mimicking the logic function of a learned neural
process, which doesn't allow the tiered complexity and concept grasping
that sensory learning does.

Because language uses discrete semantic rules, it's easy to fall into
the trap of thinking computers, given enough horsepower, are capable of
human thought. Give a computer as many semantic algorithms, metaphor
databases, and reaction grading mechanisms as you want, but it takes
much deeper and more differentiated networks to apply those words and
derive a physical meaning beyond grammatical or metaphorical
boundaries. This is the difference between a system that resembles
intelligence and an intelligent system. The resembling system is only
capable of processing information based on algorithms, not of
reworking an algorithm based on the reasoning for executing the
function. Whether our AGI is conscious or not, it could still be
functionally equivalent to a human mind in terms of output. The
recursive bidirectional nature of neurons and their relation to forming
a gestalt is something we're barely able to grasp as a concept, let
alone code for. The nature of our hardware is going to have to change
to accommodate these multidimensional and recursive problems in
computing.





Josh Treadwell
 Systems Administrator
 [EMAIL PROTECTED]
 direct:480.206.3776

C.R.I.S. Camera Services
250 North 54th Street
Chandler, AZ 85226 USA
p 480.940.1103 / f 480.940.1329
http://www.criscam.com



BillK wrote:
On 10/20/06, Richard Loosemore [EMAIL PROTECTED] wrote:

 For you to blithely say "Most normal speaking requires relatively little
 'intelligence'" is just mind-boggling.

I am not trying to say that language skills don't require a human
level of intelligence. That's obvious. That is what makes humans human.

But day-to-day chat can be mastered by children, even in a foreign
language.

Watch that video I referenced in my previous post, of an American
chatting to a Chinese woman via a laptop running MASTOR software.
http://www.research.ibm.com/jam/speech_to_speech.mpg

Now tell me that that laptop is showing great intelligence to
translate at the basic level of normal conversation. Simple
subject-object-predicate stuff. Basic everyday vocabulary.
No complex similes, metaphors, etc.

There is a big difference between discussing philosophy and saying
"Where is the toilet?"
That is what I was trying to point out.

BillK






Re: [agi] SOTA

2006-10-20 Thread justin corwin

I want to strongly agree with Richard on several points here, and
expand on them a bit in light of later discussion.

On 10/20/06, Richard Loosemore [EMAIL PROTECTED] wrote:

It used to be a standing joke in AI that researchers would claim there
was nothing wrong with their basic approach, they just needed more
computing power to make it work.  That was two decades ago:  has this
lesson been forgotten already?


This was very true then, and continues to be now. For those who use the
explanation of insufficient computing power, I would question what
approaches you would expect to be viable at higher computing power?
How do they scale? Why would they work better with more computation?

Relatedly, very very few AI research programmes operate in strict real
time. Many use batch processes, or virtual worlds, or automated
interaction scripts. It would be trivial to modify these systems to
behave as if they had 10 times as much computational power, or a
thousand times. Even if it took 1,000,000 seconds (11.5 days) for
every second of intelligent behavior with currently available
computing power, the results would be worth it, and unmistakable, if
true.

I suspect that this would not work, as simply increasing computing
power would not validate current AI systems.


A completely spurious argument.  You would not necessarily *need* to
simulate or predict the AI, because the kind of simulation and
prediction you are talking about is low-level, exact state prediction
(this is inherent in the nature of proofs about Kolmogorov complexity).


This is very important, and I strongly agree that analysis of this kind
is unhelpful. It's easy to show that heat engines and turbines and all
sorts of things are so insanely complex that they can't possibly be
modeled in the general case. But we needn't do so. We are interested
in the behavior of certain parameters of such systems, and we can
reduce the space of the systems we investigate (very few people build
turbines with disconnected parts, or asymmetrical rotation, for
example).


It is entirely possible to build an AI in such a way that the general
course of its behavior is as reliable as the behavior of an Ideal Gas:
can't predict the position and momentum of all its particles, but you
sure can predict such overall characteristics as temperature, pressure
and volume.


This is the only claim in this message I have any disagreement with
(which must be some sort of record given my poor history with
Richard). I agree that it's true in principle that AIs can be made this
way, but I'm not yet convinced that it's possible in practice.

It may be that the goals and motivations of such artificial
systems are not among those characteristics that lie on the surface
of such boiling complexity, but within it. I have the same
disagreement with Eliezer about the certainty he places on the future
characteristics of AIs: given that no one here is describing the
behavior of a specific AI system, such conclusions strike me as
premature, but perhaps not unwarranted.

--
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com



Re: [agi] SOTA

2006-10-20 Thread Philip Goetz

On 10/19/06, Olie Lamb [EMAIL PROTECTED] wrote:

For instance, the soccer-bots get better every year, cars can now finish
DARPA grand challenge -like events in reasonable time...  (I personally
think that we're fast approaching a critical point where the technology is
just good enough to attract more cash and hence more improvement; although
meatbags will be better traffic-drivers for a while yet, physical
interaction systems can now perform well enough for many applications)


I used to work some with NASA on Free Flight issues.  Free Flight is a
grand plan to let pilots choose their own routes.  Part of it involves
computer-controlled conflict resolution, meaning that instead of
having a guy with a radar screen giving instructions to pilots to keep
them from crashing into each other, their onboard computers do it.

Computers have been able to handle every aspect of flying, from
takeoff, piloting, collision-avoidance, routing, and landing, better
than people, for a long time now.  But nobody will let computers do
it.  Part of the issue is reliability: you need less than a
one-in-a-million chance of error to get away with zero errors per
year in the US.  Air traffic controllers in the US today commit
tens of thousands of errors per year, though most of them have no
consequences because airspace is big.



Re: [agi] SOTA

2006-10-20 Thread Philip Goetz

On 10/20/06, Josh Treadwell [EMAIL PROTECTED] wrote:


The resembling system is only capable of processing information based on
algorithms, and not reworking an algorithm based on the reasoning for
executing the function.


This appears to be the same argument Spock made in an old Star Trek
episode, that the computer chess-player could never beat the person
who programmed it.  Note to the world:  It is wrong.  Please stop
using this argument.



Re: [agi] SOTA

2006-10-20 Thread Philip Goetz

On 10/19/06, Peter Voss [EMAIL PROTECTED] wrote:

I'm often asked about state-of-the-art in AI, and would like to get some
opinions.

What do you regard, or what is generally regarded as SOTA in the various AI
aspects that may be, or may be seen to be relevant to AGI?



- NLP components such as parsers, translators, grammar-checkers


AFAIK, our group here at NIH has the state-of-the-art in extracting
knowledge from text.  We have a system that reads medical abstracts -
not fake or hand-chosen texts, but horribly messy run-on sentences
with generally correct but reprehensible grammar - and extracts
predicates, like so:

Qinghaosu and its derivatives are rapidly effective antimalarial drugs
derived from a Chinese plant.
- Qinghaosu treats malaria

Kava as an anticraving agent: preliminary data.
- Kava treats craving

Leukotriene receptor antagonists and synthesis inhibitors are a new
class of anti-inflammatory drugs that have clinical efficacy in the
management of asthma, allergic rhinitis and inflammatory bowel
disease.
- receptor antagonists treat asthma
   receptor antagonists treat allergic rhinitis
   receptor antagonists treat inflammatory bowel disease
   synthesis inhibitors treat asthma
   ...

Unfortunately, the secret to getting such good performance is to have
lots and lots of domain knowledge, and lots of linguists studying lots
of different grammatical constructions, and lots of coders writing
code to parse each construction, and do this for many years.  This
project is in its 15th year.
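The flavor of that rule-based approach can be suggested with a deliberately tiny sketch. The single regex rule below is invented for illustration and is nothing like the NIH system itself, whose rules were built by linguists over many years with heavy domain knowledge:

```python
import re

# Toy pattern-based predicate extraction: a drastically simplified
# sketch of the approach described above, NOT the NIH system.  Each
# rule maps one grammatical construction to a TREATS predicate.
RULES = [
    # Matches "<drug> ... anti<disease> drug/agent" constructions.
    re.compile(r"(?P<drug>\w+)[\w\s]*\banti-?(?P<disease>\w+?)\s+(?:drugs?|agents?)"),
]

def extract_treats(sentence):
    predicates = []
    for rule in RULES:
        m = rule.search(sentence)
        if m:
            # A real system would also normalize, e.g. "malarial" -> "malaria".
            predicates.append((m.group("drug"), "TREATS", m.group("disease")))
    return predicates

print(extract_treats(
    "Qinghaosu and its derivatives are rapidly effective antimalarial drugs."))
# [('Qinghaosu', 'TREATS', 'malarial')]
print(extract_treats("Kava as an anticraving agent: preliminary data."))
# [('Kava', 'TREATS', 'craving')]
```

One rule handles one construction; the point of the "15 years" remark is that real abstracts need thousands of such rules plus a medical ontology to disambiguate them.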



Re: [agi] SOTA

2006-10-20 Thread Matt Mahoney
- Original Message 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, October 20, 2006 3:35:57 PM
Subject: Re: [agi] SOTA

On 10/20/06, Matt Mahoney [EMAIL PROTECTED] wrote:

 It is not that we can't come up with the right algorithms.  It's that we 
 don't have the
 computing power to implement them.

Can you give us an example? I hope you don't mean algorithms like
exhaustive search.

For example, neural networks which perform rudimentary pattern detection and 
control for vision, speech, language, robotics, etc.  Most of the theory had 
been worked out by the 1980's, but applications have been limited by CPU speed, 
memory, and training data.  The basic building blocks were worked out much 
earlier.  There are only two types of learning in animals: classical 
(association) and operant (reinforcement) conditioning.  Hebb's rule for 
classical conditioning, proposed in 1949, is the basis for most neural network 
learning algorithms today.  Models of operant conditioning date back to W. Ross 
Ashby's 1960 "Design for a Brain", where he used randomized weight adjustments 
to stabilize a 4-neuron system built from vacuum tubes and mechanical 
components.

Neural algorithms are not intractable.  They run in polynomial time.  Neural 
networks can recognize arbitrarily complex patterns by adding more layers and 
training them one at a time.  This parallels the way people learn complex 
behavior.  We learn simple patterns first, then build on them.
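The Hebbian rule Matt mentions can be shown doing useful work in a few lines. This is a minimal one-pattern associative memory in the Hopfield style, offered only as an illustration of "fire together, wire together"; it is not a claim about how any particular modern system is trained:

```python
import random

# Minimal Hebbian associator: store one +1/-1 pattern by the
# outer-product rule w_ij = x_i * x_j, then recall it from a
# corrupted cue with a single thresholded update step.
N = 16
random.seed(0)

pattern = [random.choice([-1, 1]) for _ in range(N)]

# Hebb's rule (one-shot form); no self-connections.
weights = [[pattern[i] * pattern[j] if i != j else 0 for j in range(N)]
           for i in range(N)]

def recall(cue):
    """One synchronous update: each unit takes the sign of its input sum."""
    return [1 if sum(weights[i][j] * cue[j] for j in range(N)) >= 0 else -1
            for i in range(N)]

# Flip three bits of the stored pattern, then let the network clean it up.
cue = list(pattern)
for i in (0, 5, 9):
    cue[i] = -cue[i]

print(recall(cue) == pattern)  # True: the stored pattern is recovered
```

Both storage and recall here are a handful of multiply-adds per synapse, which is the sense in which such algorithms are polynomial-time; the cost of scaling them up is exactly the hardware question under debate.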

 The most successful AI applications today like Google require vast computing 
 power.

In what sense do you call Google an AI application?

Google does pretty well with natural language questions like "how many days 
until xmas?" even though they don't advertise it that way (as Ask Jeeves did) 
and most people don't use it that way.

Of course you might say that Google isn't doing AI, it is just matching query 
terms to documents.  But it is always that way.  Once you solve the problem, 
it's not AI any more.  Deep Blue isn't AI.  It just implements a chess playing 
algorithm in fast hardware.  Suppose we decide the easiest way to build a huge 
neural network is to use real neurons and some genetic engineering.  Is that AI?
 
-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] SOTA

2006-10-20 Thread Josh Treadwell




Philip Goetz wrote:
On 10/20/06, Josh Treadwell [EMAIL PROTECTED] wrote:

 The resembling system is only capable of processing information based on
 algorithms, and not reworking an algorithm based on the reasoning for
 executing the function.

This appears to be the same argument Spock made in an old Star Trek
episode, that the computer chess-player could never beat the person
who programmed it. Note to the world: It is wrong. Please stop
using this argument.

It's not the same. A chess program is merely comparing outcomes and
percentages, while adapting algorithmically to play styles. It's a
discrete system within which logically written functions are executed.
Yes, it adapts to moves and keeps track of which moves are going on,
but there is no higher-order AI that is thinking "out of the box" about
the problem. It simply approaches, computes based on a database of
moves, and weighs its advantages and disadvantages. A chess program
never reworks its strategy based on its own reasoning about why it's
playing. It just does, and does well. Yes, it could beat us, but that's
akin to saying a calculator is faster at math than we are.

Josh Treadwell
 Systems Administrator
 [EMAIL PROTECTED]
 direct:480.206.3776

C.R.I.S. Camera Services
250 North 54th Street
Chandler, AZ 85226 USA
p 480.940.1103 / f 480.940.1329
http://www.criscam.com






Re: [agi] SOTA

2006-10-20 Thread Richard Loosemore

Matt Mahoney wrote:

From: Pei Wang [EMAIL PROTECTED]

On 10/20/06, Matt Mahoney [EMAIL PROTECTED] wrote:


It is not that we can't come up with the right algorithms.  

 It's that we don't have the

computing power to implement them.



Can you give us an example? I hope you don't mean algorithms like
exhaustive search.


For example, neural networks which perform rudimentary pattern 
detection and control for vision, speech, language, robotics, etc.  
Most of the theory had been worked out by the 1980's, but 
applications have been limited by CPU speed, memory, and training 
data.  The basic building blocks were worked out much earlier.  
There are only two types of learning in animals: classical 
(association) and operant (reinforcement) conditioning.  
Hebb's rule for classical conditioning, proposed in 1949, is 
the basis for most neural network learning algorithms today.  
Models of operant conditioning date back to W. Ross Ashby's 
1960 "Design for a Brain", where he used randomized weight 
adjustments to stabilize a 4-neuron system built from vacuum 
tubes and mechanical components.


Neural algorithms are not intractable.  They run in polynomial time.  
Neural networks can recognize arbitrarily complex patterns by adding 
more layers and training them one at a time.  This parallels the 
way people learn complex behavior.  We learn simple patterns first, 
then build on them.



I initially wrote a few sentences saying what was wrong with the above, 
but I chopped it.  There is just no point.


What you said above is just flat-out wrong from beginning to end.  I 
have done research in that field, and taught postgraduate courses in it, 
and what you are saying is completely divorced from reality.






Richard Loosemore



Re: [agi] SOTA

2006-10-20 Thread Richard Loosemore

justin corwin wrote:



It is entirely possible to build an AI in such a way that the general
course of its behavior is as reliable as the behavior of an Ideal Gas:
can't predict the position and momentum of all its particles, but you
sure can predict such overall characteristics as temperature, pressure
and volume.


This is the only claim in this message I have any disagreement with
(which must be some sort of record given my poor history with
Richard). I agree that its true in principle that AIs can be made this
way, but I'm not yet convinced that it's possible in practice.

[Heck, yeah:  I had to check to see if maybe you were some other justin 
corwin ;-)]


I agree with your caution about that claim.

For what it's worth, though, what I had in mind was a motivational system 
that acts like a point moving around in an unconstrained way in a 
multidimensional space (hence my thermodynamics metaphor), BUT with the 
shape of that space heavily distorted so that it has one enormously deep 
basin of attraction.  To be stable and safe, it must stay in the basin. 
Can it get out?  Yes, in principle.  Mean time between escapes from 
the basin?  Greater than the lifetime of the universe by a huge factor.  
Is it therefore guaranteed to be safe?  No.  Is it ever likely to get out? 
Not a chance.


(I haven't said how to do this.  What I just gave was a summary of 
overall dynamics of a particular system).


In the same way, all the molecules of an ideal gas could, in theory, 
just happen to divide into two equal chunks and head toward opposite 
ends of the box at full speed.  They have the freedom to do so.  We just 
don't have to worry about it happening, for obvious reasons.
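The "obvious reasons" can be made quantitative with a one-line calculation. This is a deliberately crude sketch that treats each molecule's direction along one axis as an independent fair coin, which real gas dynamics of course is not, but it conveys the scale:

```python
import math

# Probability that all N molecules happen to move the same direction
# along one axis at a given instant, under a fair-coin model: the
# first molecule picks a side, the other N-1 must agree, so p = 2^-(N-1).
def log10_odds_all_one_way(n_molecules):
    """log10 of the probability that every molecule moves the same way."""
    return -(n_molecules - 1) * math.log10(2)

for n in (10, 100, 6.02e23):          # up to a mole of gas
    print(f"N = {n:g}: log10(p) = {log10_odds_all_one_way(n):.3g}")
```

Even checking 10^9 instants per second for the age of the universe gives only ~10^26 trials, so an event with log10(p) around -10^23 is, for practical purposes, exactly the "don't worry about it" Richard describes; the open question in the thread is whether an AI's motivational landscape can really be engineered to have the same statistics.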


The beauty of the approach is that the system can know perfectly well 
that it has been engineered that way by us.  But it does not care:  if 
it starts out being empathic to humanity, it will not deliberately or 
accidentally change itself to contradict that initial goal.  It will 
have plenty of freedom to do so in principle, but in practice it will 
know that the consequences could be disastrous.


I realise that I have been tempted to explain an idea in partial, 
cryptic terms (laying myself open to requests for more detail, or 
scorn), so apologies if the above seems opaque.  More when I get the time.


Richard Loosemore.


It may be that the goals and motivations of such artificial
systems are not among those characteristics that lie on the surface
of such boiling complexity, but within it. I have the same
disagreement with Eliezer about the certainty he places on the future
characteristics of AIs: given that no one here is describing the
behavior of a specific AI system, such conclusions strike me as
premature, but perhaps not unwarranted.





Re: [agi] SOTA

2006-10-19 Thread Pei Wang

Peter,

I'm afraid that your question cannot be answered as it is. AI is
highly fragmented, which means not only that few projects aim at
the whole field, but also that few even cover one of the subfields you
listed. Instead, each project usually aims at a special problem under
a set of special assumptions. Consequently, it is not always
meaningful to compare them in functionality.

For example, many people may agree that Stanley the Volkswagen
represents the SOTA in robot cars, but is it SOTA in interactive
robotics systems? Is it ahead of Cog? When a common-sense KB is
mentioned, people will think of Cyc, but is it SOTA? If it is not,
which one is? How can we compare an inference engine based on
first-order predicate calculus to one based on a Bayesian net?

Of course, in each field there are projects that are more typical,
more influential, or more interesting than the rest, but they are not
really SOTA in the sense of being ahead of the others in
functionality, since the others are usually running in different
directions.

In your list, NLP may be an exception to what I said above. Since I'm
not an expert in that field, I won't try to answer.

By definition, Integrated intelligent systems should be comparable,
but clearly there is no consensus on this topic yet. ;-)

Pei

On 10/19/06, Peter Voss [EMAIL PROTECTED] wrote:

I'm often asked about state-of-the-art in AI, and would like to get some
opinions.

What do you regard, or what is generally regarded as SOTA in the various AI
aspects that may be, or may be seen to be relevant to AGI?

For example:

- Comprehensive (common-sense) knowledge-bases and/or ontologies
- Inference engines, etc.
- Adaptive expert systems
- Question answering systems
- NLP components such as parsers, translators, grammar-checkers
- Interactive robotics systems (sensing/ actuation) - physical or virtual
- Vision, voice, pattern recognition, etc.
- Interactive learning systems
- Integrated intelligent systems
... whatever ...

I'm looking for the best functionality -- irrespective of proprietary,
open-source, or academic.



Peter






Re: [agi] SOTA

2006-10-19 Thread Matt Mahoney
- Comprehensive (common-sense) knowledge-bases and/or ontologies


Cyc/OpenCyc, Wordnet, etc. but there seems to be no good way for applications 
to use this information and no good alternative to hand coding knowledge.

- Inference engines, etc.

- Adaptive expert systems


A dead end.  There has been little progress since the 1970's.

- Question answering systems


Google.

- NLP components such as parsers, translators, grammar-checkers


Parsing is unsolved.  Translators like Babelfish have progressed little since 
the 1959 Russian-English project.  Microsoft Word's grammar checker catches 
some mistakes but is clearly not AI.

- Interactive robotics systems (sensing/ actuation) - physical or virtual


The Mars Rovers and the DARPA Grand Challenge (robotic auto race) are 
impressive but we clearly have a long way to go before your car drives itself.

- Vision, voice, pattern recognition, etc.


It is difficult to say for face recognition systems: because of their use in 
security, accuracy rates are secret.  I believe they have been oversold.  Voice 
recognition is limited to words and short phrases until we develop better 
language models with AI behind them.  A keyboard is still faster than a 
microphone.

- Interactive learning systems

- Integrated intelligent systems


Lots of theoretical results, but no real applications.

-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] SOTA

2006-10-19 Thread BillK

On 10/19/06, Matt Mahoney wrote:


- NLP components such as parsers, translators, grammar-checkers


Parsing is unsolved.  Translators like Babelfish have progressed little since 
the 1959
Russian-English project.  Microsoft Word's grammar checker catches some mistakes
but is clearly not AI.




http://www.charlotte.com/mld/charlotte/news/nation/15783022.htm

American soldiers bound for Iraq equipped with laptop translators
Called the Two Way Speech-to-Speech Program, it's a translator that
uses a computer to convert spoken English to Iraqi Arabic and vice
versa.
-

If it is life-or-death, it must work pretty well.   :)

I believe this is based on the IBM MASTOR project.
http://domino.watson.ibm.com/comm/research.nsf/pages/r.uit.innovation.html

MASTOR's innovations include: methods that automatically extract the
most likely meaning of the spoken utterance and store it in a
tree-structured set of concepts like "actions" and "needs"; methods that
take the tree-based output of a statistical semantic parser and
transform the semantic concepts in the tree to express the same set of
concepts in a way appropriate for another language; methods for
statistical natural language generation that take the resultant set of
transformed concepts and generate a sentence for the target language;
generation of proper inflections by filtering hypotheses with an
n-gram statistical language model; etc.
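Based only on the description quoted above, the shape of such a pipeline (parse to concepts, transfer, generate) can be caricatured in a few lines. Everything here is invented for illustration: the concept names, the lookup tables, and the use of Spanish as a stand-in target language are not MASTOR's actual representations:

```python
# Toy concept-transfer translator in the spirit of the MASTOR
# description: utterance -> concept structure -> target-language text.

# 1. "Semantic parse": map a known utterance to a small concept record.
PARSE = {
    "where is the toilet": {"act": "request-location", "object": "toilet"},
    "we need water":       {"act": "state-need", "object": "water"},
}

# 2. "Generation": realize the same concepts in the target language
#    (Spanish stand-in; phrases are illustrative only).
GENERATE = {
    ("request-location", "toilet"): "¿dónde está el baño?",
    ("state-need", "water"):        "necesitamos agua",
}

def translate(utterance):
    concepts = PARSE[utterance.lower().strip("?!. ")]
    return GENERATE[(concepts["act"], concepts["object"])]

print(translate("Where is the toilet?"))  # ¿dónde está el baño?
```

The real system replaces both lookup tables with a statistical semantic parser and an n-gram-filtered generator; that substitution, not the pipeline skeleton, is where the engineering effort lives, and it is also why the approach works for "Where is the toilet?" but not for philosophy.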


BillK



Re: [agi] SOTA

2006-10-19 Thread YKY (Yan King Yin)

Hi Peter,

I think in all of the categories you listed, there should be a lot of
progress, but they will hit a ceiling because of the lack of an AGI
architecture.

It is very clear that vision requires AGI to be complete. So does NLP.
In vision, many objects require reasoning to recognize. NLP also
requires reasoning to interpret metaphors, which are beyond the scope
of current parsers.

So the goal is for vision/NLP researchers to work within some AGI
framework. Unfortunately a standard framework is unavailable now. We
may start such a framework; laying out the common knowledge
representation would be most important.

This also shows the need for modularity and divide-and-conquer. AGI
sub-problems like vision and NLP are themselves pretty big projects.
So it may be unwise to try to solve them all alone.

I think other candidates that have the potential to become AGI are:
Cyc, Soar, ACT-R, and other less known cognitive architectures.

YKY



Re: [agi] SOTA

2006-10-19 Thread Matt Mahoney
- Original Message 
From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, October 19, 2006 11:43:46 AM
Subject: Re: [agi] SOTA

On 10/19/06, Matt Mahoney wrote:

 - NLP components such as parsers, translators, grammar-checkers


 Parsing is unsolved.  Translators like Babelfish have progressed little since 
 the 1959
 Russian-English project.  Microsoft Word's grammar checker catches some 
 mistakes
 but is clearly not AI.



http://www.charlotte.com/mld/charlotte/news/nation/15783022.htm

I think the problem will eventually be solved.  There has been a long period of 
stagnation since the 1959 Russian-English project, but I think this period will 
soon end thanks to better language models due to the recent availability of 
large text databases, fast hardware, and cheap memory.  Once we solve the 
language modeling problem, we will remove the main barrier to many NLP problems 
such as speech recognition, translation, OCR, handwriting recognition, and 
question answering.  Google has made good progress in this area using 
statistical modeling methods and was top ranked in a recent competition.  
Google has access to terabytes of text in many languages and a custom operating 
system for running programs in parallel on thousands of PCs.  Here is Google's 
translation of the above article into Arabic and back to English.  But as you 
can see, the job isn't finished.

American soldiers heading to Iraq with a laptop translators from
Stephanie Hinatz daily newspapers (Newport News,va. (ethnic)نورفولكVa.
army-star trip now using similar instrument in Iraq to help the forces
of language training without contact with Iraqi civilians and the
training of the country's emerging police and military forces. the name
of a double discourse to address Albernamjoho translator, which uses
computers to convert spoken English Iraqi pwmound and vice versa. while
the program is still technically in the research and development
stage,Norfolk-based U.S. Joint Forces Command,in conjunction with the
Defense Advanced Research projects Agency,some models has been sent to
Iraq, 70 troops is used in tactical environments to evaluate its
effectiveness. and so far is fine and said Wayne Richards,Commander
leadership in the implementation section. the need for such a device
for the first time in April 2004 when the joint forces command received
an urgent request from commanders on the ground in Abragherichards.
soldiers on the ground needed to improve communication with the Iraqi
people. But because of the shortage of linguists and translators
throughout the Department of Defense do not come from the
difficult,even some of the forces of the so-called most important work
in Iraq today in Iraq, the training of police and military forces. get
those troops trained and capable of maintaining the security of the
country itself is a reminder of return for service members to continue
der inside and outside the war zone. experts are trying to develop this
kind of technical translation for 10 years,He said that Richards.
today, in its current form,The translator is the rugged laptop with the
plugs are two or loudspeakers and Alsmaatrichards pointing to a model
and convert. It is also easy to use Talking on the phone,as evidenced
shortly after the Norfolk demonstration Tuesday. I tell you, an Iraqi
withdrawal on a computer. you put the microphone up to your mouth. when
he said :We are here to provide food and water for your family, You
held by the E key to security in a painting keys. you,I wrote to you
the text of what we discussed to delight on the screen. you wipe the
words to make sure you get exactly. If you can change it manually. when
you are convinced you to the t key to the interpretation and sentence
looming on the screen once Achrihzh time in Arab Iraq. the computer
also says his loud speakers through. the process is the same Balanceof
those who did not talk to you. I repeat what you have and the Arab
computer will spit on you, the words in the English language. as do
translator rights,the program assumes some meanings.  not 100%
Richards. when I ask,For example,Can the newspaper today, the
Arab-language Alanklizihaltrgmeh direct Can the newspaper today.
because in any act made in every conversation with the translator is
taken. any translation is not due to the past program. Defense Language
Institute in California also true of all the translations and Richards.
now,because of its size,the best place to use the translator is at the
center of command and control or a classroom. It is unlikely that the
average Navy will be overseeing the cart with 100 pounds of equipment
to implement that attacks in Baghdad, in Sadr City. We hope if the days
will be small enough that the sergeant to be implemented in a skirt.
Think about it and Richards. sergeant beating on the door of the house
formulateseen in Fallujah. a woman answers the door. The soldier's
weapon. because it is afraid. the soldier immediately to the effects
translator
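The statistical language modeling mentioned above can be illustrated with a
minimal bigram model with add-one smoothing. This is a toy sketch only; the
corpus and function names are illustrative and have nothing to do with
Google's actual system:

```python
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over a whitespace-tokenized corpus."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def prob(unigrams, bigrams, prev, word):
    """Add-one smoothed estimate of P(word | prev)."""
    vocab = len(unigrams)
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)

corpus = ["we provide food", "we provide water"]
uni, bi = train_bigram(corpus)
# "provide" always follows "we" in this toy corpus, so its smoothed
# probability is higher than that of an unseen continuation.
assert prob(uni, bi, "we", "provide") > prob(uni, bi, "we", "food")
```

Scaled up to terabytes of text and much longer n-grams, counting-based models
of this general flavor are what "statistical modeling methods" refers to.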

Re: [agi] SOTA

2006-10-19 Thread Richard Loosemore

Matt Mahoney wrote:

From: BillK [EMAIL PROTECTED]



Parsing is unsolved.  Translators like Babelfish have progressed little since 
the 1959
Russian-English project.  Microsoft Word's grammar checker catches some mistakes
but is clearly not AI.

I think the problem will eventually be solved.  There has been a long period
of stagnation since the 1959 Russian-English project, but I think this period
will soon end thanks to better language models due to the recent availability
of large text databases, fast hardware, and cheap memory.  Once we solve the
language modeling problem, we will remove the main barrier to many NLP
problems such as speech recognition, translation, OCR, handwriting
recognition, and question answering.

Sorry, but IMO large databases, fast hardware, and cheap memory ain't 
got nothing to do with it.


Anyone who doubts this get a copy of Pim Levelt's Speaking, read and 
digest the whole thing, and then meditate on the fact that that book is 
a mere scratch on the surface (IMO a scratch in the wrong direction, 
too, but that's neither here nor there).


I saw a recent talk about an NLP system which left me stupefied that so 
little progress has been made in the past 20 years.


Having a clue about just what a complex thing intelligence is, has 
everything to do with it.






Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] SOTA

2006-10-19 Thread BillK

On 10/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:


Sorry, but IMO large databases, fast hardware, and cheap memory ain't
got nothing to do with it.

Anyone who doubts this get a copy of Pim Levelt's Speaking, read and
digest the whole thing, and then meditate on the fact that that book is
a mere scratch on the surface (IMO a scratch in the wrong direction,
too, but that's neither here nor there).

I saw a recent talk about an NLP system which left me stupefied that so
little progress has been made in the past 20 years.

Having a clue about just what a complex thing intelligence is, has
everything to do with it.




Most normal speaking requires relatively little 'intelligence'.

Adults who take young children on foreign holidays are amazed at how
quickly the children appear to be chattering away to other children in
a foreign language.
They manage it for several reasons:
1) they don't have the other interests and priorities that adults have.
2) they use simple sentence structures and smallish vocabularies.
3) they discuss simple subjects of interest to children.

The new IBM MASTOR system seems to be better than Babelfish. IBM are
just starting on widespread commercial marketing of the system. Aiming
at business travellers, apparently.

MASTOR project description
http://domino.watson.ibm.com/comm/research.nsf/pages/r.uit.innovation.html

Here is a pdf file describing the MASTOR system in more detail
http://acl.ldc.upenn.edu/W/W06/W06-3711.pdf

Here is a 12MB mpg download of the system in use. Simple speech, but impressive.
http://www.research.ibm.com/jam/speech_to_speech.mpg

BillK



Re: [agi] SOTA

2006-10-19 Thread Olie Lamb
(Excellent list there, Matt.)

Although Pei Wang makes a good point that the fragmentation of AI makes it
difficult to compare projects, it is interesting to note the huge differences
in the movements in different narrow-AI fields.

As has already been mentioned, it is interesting to compare the way that
progress is very slow in areas such as NLP and expert systems, whereas there
is significant, albeit gradual, progress in physical interaction systems.

For instance, the soccer-bots get better every year, and cars can now finish
DARPA Grand Challenge-like events in reasonable time.  (I personally think
we're fast approaching a critical point where the technology is just good
enough to attract more cash and hence more improvement; although meatbags
will be better traffic-drivers for a while yet, physical interaction systems
can now perform well enough for many applications.)

Although the question "What is state-of-the-art?" won't attract an
incontrovertibly good answer, it prompts a lot of bloody good questions that
can be answered usefully.

-- Olie



[agi] SOTA

2006-10-18 Thread Peter Voss
I'm often asked about state-of-the-art in AI, and would like to get some
opinions.

What do you regard, or what is generally regarded as SOTA in the various AI
aspects that may be, or may be seen to be relevant to AGI?

For example: 

- Comprehensive (common-sense) knowledge-bases and/or ontologies
- Inference engines, etc.
- Adaptive expert systems
- Question answering systems
- NLP components such as parsers, translators, grammar-checkers
- Interactive robotics systems (sensing/ actuation) - physical or virtual
- Vision, voice, pattern recognition, etc.
- Interactive learning systems
- Integrated intelligent systems
... whatever ...

I'm looking for the best functionality -- irrespective of proprietary,
open-source, or academic.



Peter



Re: [agi] SOTA 50 GB memory for Windows

2004-08-31 Thread Brian Atkins
AMD demonstrates the first x86 dual-core processor
http://www.digitimes.com/news/a20040831PR200.html
Confirms it will re-use the current Opteron 940-pin socket
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/
---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] SOTA 50 GB memory for Windows

2004-08-24 Thread Brian Atkins
Opteron systems are definitely the sweet spot currently, and for the near 
future. Rumors are that major server companies are working on 32-way 
systems to be released soon. Also of course Cray bought that OctigaBay 
company and now has this:

http://www.cray.com/products/systems/xd1/
Also rumor has it that when the dual-core Opterons come out in 2H 2005 
you will be able to drop them into most existing motherboards/servers 
that you can buy right now. So upgradability looks potentially excellent.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/



Re: [agi] SOTA 50 GB memory for Windows

2004-08-24 Thread J. Andrew Rogers
David Hart wrote:
 Because the memory controller resides on the CPU in 
 Opteron systems, all 8 CPUs must be populated, but
 this can be achieved with the slowest/cheapest model,
 the Opteron 840 (1.4 GHz). 


I would second using the cheapest CPU part available, which currently is
the 1.6GHz part; you'll only save pocket change by going with the 1.4GHz part
(for the 2xx series, the difference is about $10).

The low clock speed is deceptive.  If you use one of the AMD64 optimized
compilers (e.g. http://www.pathscale.com) rather than GCC, the real
performance is stunning even on slow CPU parts -- it is as fast or
faster than pretty much anything else in the general case.  For SMP
codes, the only thing comparable is the Unix Big Iron (e.g. IBM's Power
series boxen) in terms of how it scales across multiple processors.  Of
all the different architectures I touch, the Opteron is my favorite. 
Top-notch Big Iron performance at commodity prices.  Itanium is a bit
better for floating point (PPC970 only for DSP codes), but not much and
it is worse at a lot of other codes and expensive.  It is worth noting
that the Intel AMD64-compatible chips will be ISA compatible, but
missing all the features that make the Opterons scalable.

And as Brian noted, it has an aggressive looking roadmap.  There is a
new version of HyperTransport coming out relatively soon (maybe first
part of next year?) which will increase the scalability even more, and
the multi-core CPUs should be with us shortly.  As the big server
vendors put more and more money into big Opteron boxes, Linux being the
default OS, this looks like the bang/buck champion for the foreseeable
future.  64-bit Windows may be viable, but I have no idea if it will run
well on the bigger systems as a practical matter.  The Windows VM
continues to be funky from what I understand, though I have no personal
experience.

j. andrew rogers





Re: [agi] SOTA 50 GB memory for Windows

2004-08-23 Thread J. Andrew Rogers
Shane wrote:
 As for more indirect solutions like RAM disks... I think you would
 lose at least a factor of ten in speed compared to simply accessing
 system memory directly as you would need to go through the file system
 and out to an external device with RAM in it pretending to be a disk.
 If I remember correctly the fastest disk interfaces you can get for a
 PC are in the order of 100 MB per second while system RAM is more like
 a few GB per second.  So if you need really high speed then RAM is
 perhaps your only option I think.


For RAM, as with disk, it is more about latency than bandwidth.  Even
with contrived codes, you can't drive RAM at 100% bandwidth even on
architectures with extremely low latency (e.g. Opterons), and on more
average architectures (e.g. PPC970) it is not even remotely close to
theoretical. For big memory systems a la ccNUMA, the performance is
less.  A big part of the reason is that the latency limits the number of
requests to core that you can make per second regardless of the
theoretical bandwidth.  The disk may have less bandwidth, about 250MB/s
per channel on a normal high-end HBA, but it can drive that close to 100%,
and it is trivial to run multiple channels in parallel, which will get
you in the same region of real RAM bandwidth-wise.  The big downside is
that it gets this performance because it works on big chunks of data
compared to real RAM, so the number of discrete fetches per second will
probably be an order of magnitude less than system core even under ideal
circumstances.

I think the biggest argument against using a big RAM disk array is that
the HBAs and drivers are optimized for very different types of things
than a RAM extension.  When you get into big ccNUMA systems, the real
memory latencies start to creep within spitting distance of network DMA,
arguably the most interesting low-cost alternative.
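The point that latency, not peak bandwidth, limits what an application
actually sees can be made concrete with a toy throughput model. The latency
and bandwidth figures below are illustrative assumptions, not measurements of
any particular system:

```python
def effective_mb_per_sec(fetch_bytes, latency_s, bandwidth_mb_s):
    """Throughput seen by an application issuing dependent fetches:
    each fetch pays the full round-trip latency before data flows."""
    transfer_s = fetch_bytes / (bandwidth_mb_s * 1e6)
    return (fetch_bytes / 1e6) / (latency_s + transfer_s)

# 64-byte cache-line fetches from RAM (~100 ns latency, ~3 GB/s peak):
ram_small = effective_mb_per_sec(64, 100e-9, 3000)
# 64 KB chunks from disk (~10 ms seek, ~100 MB/s streaming):
disk_big = effective_mb_per_sec(64 * 1024, 10e-3, 100)

# Small dependent fetches never come close to peak RAM bandwidth,
# because latency dominates the cost of each request.
assert ram_small < 3000 * 0.25
```

Under these assumed numbers the small-fetch RAM case delivers only a fraction
of the theoretical 3 GB/s, which is the effect described above.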

j. andrew rogers




Re: [agi] SOTA 50 GB memory for Windows

2004-08-21 Thread Milon Krejca
Flash memories could not be used as memory - they withstand no more than
100,000 rewrites. (I do not recall this exactly; it could be even less,
around 10,000.)

Milon
Lukasz Kaiser wrote:
Does anyone know how flash drives perform in such setting, can they
be used as a slower alternative to RAM ?
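Milon's endurance concern can be roughly quantified. The cycle counts and
write rates below are assumptions for illustration only, and the model
optimistically assumes perfectly even wear across the device (real
controllers approximate this with wear levelling):

```python
def flash_lifetime_days(capacity_gb, write_cycles, write_mb_per_s):
    """Days until every cell has been rewritten `write_cycles` times,
    assuming writes are spread perfectly evenly over the device."""
    total_writes_mb = capacity_gb * 1024 * write_cycles
    return total_writes_mb / write_mb_per_s / 86400

# A hypothetical 32 GB device rated for 10,000 cycles, hammered at
# 200 MB/s as a RAM substitute, wears out in a matter of weeks:
weeks_scale = flash_lifetime_days(32, 10_000, 200)
assert weeks_scale < 30
```

At the lighter write rates typical of disk-style use the same device lasts
years, which is why flash works as storage but not as a RAM replacement.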

 



RE: [agi] SOTA 50 GB memory for Windows

2004-08-21 Thread Peter Voss
Visual Studio (beta) with 64-bit (c#) compilation is available now:

http://lab.msdn.microsoft.com/vs2005/productinfo/productline/

as is Windows XP 64-bit for testing.

That's all one needs for development.

Peter




-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Shane
Sent: Friday, August 20, 2004 9:57 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] SOTA 50 GB memory for Windows



Well, I guess I have become skeptical about when they will
release such a thing, as they have been saying that they will
put out a 64-bit version for Intel for years now but keep
pushing back the release date.  No doubt it's related
to the difficulties Intel has been having with Itanium.
If Intel deliver their AMD compatible 64 bit chips reasonably
soon then surely a 64 bit Windows release can't be too far
away.

The other thing is that when something as big as this changes
in the OS it can take a while for various things to straighten
themselves out.  Things like development tools, devices drivers
and so on.  I guess for you the key thing is when they will
deliver a 64 bit version of C# and associated tools.

Curiously, you could get 64 bit Windows for Alpha CPUs about
8 years ago!  A friend of mine used to develop things for it
way back then, but that version of Windows was eventually
killed off by Microsoft.

Shane

Peter Voss wrote:
 Microsoft Updates 64-bit Windows XP Preview Editions

 http://www.eweek.com/article2/0,1759,1637471,00.asp

 Peter



[agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Peter Voss
What are the best options for large amounts of fast memory for Windows-based
systems?

I'm looking for price & performance (access time) for:

1) Cached RAID
2) RAM disks
3) Internal RAM (using 64 bit architecture?)
4) other

Thanks for any info.

Peter



Re: [agi] SOTA 50 GB memory for Windows

2004-08-20 Thread J. Andrew Rogers
 I'm looking for price & performance (access time) for:
 
 1) Cached RAID


This will be useless for runtime VM or pseudo-VM purposes.  RAID cache
isolates the application from write burst bottlenecks when syncing disks
(e.g. checkpointing transaction logs), but that's about it.  For flatter
I/O patterns, you'll lose 3-4 orders of magnitude access time over
non-cached main memory and it won't be appreciably faster than raw
spindle.  Wrong tool for the application.


 2) RAM disks


Functionally workable, but very expensive.  It is much cheaper per GB to
buy the biggest RAM chips you can find and put them on the motherboard.
The primary advantage is that you can scale it to very large sizes
while only losing somewhere around an order of magnitude versus main
core if done well.


 3) Internal RAM (using 64 bit architecture?)


The best performing, and relatively cheap too.  You can slap 32 GB of
RAM in an off-the-shelf Opteron system for not much money.  The biggest
problem is finding motherboards with loads of memory slots and the fact
that there is a hard upper bound on how much memory a given system will
support.


 4) other


Nothing I can think of that will work with Windows.  There are other
performant and cost-effective options for Linux/Unix systems.


A compromise might be to max out system RAM within reason (e.g. using
2GB DIMMs), and then using RAM disks on a fast HBA to get the rest of
your capacity.  All of this will require a 64-bit OS to be efficient.


j. andrew rogers




RE: [agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Peter Voss
Thanks Andrew.

I didn't realize that RAID cache doesn't help on reads (like RAM disks do).
Just how expensive is a high-performance 50GB RAM disk system?

Off hand, anyone know progress/ETA on Intel EM64T for .net languages (c#)?

Also, what Windows-compatible machines offer the most RAM?  (Dell seems to
max out at 8GB.)

Peter





RE: [agi] SOTA 50 GB memory for Windows

2004-08-20 Thread J. Andrew Rogers
 I didn't realize that RAID cache doesn't help on reads (like RAM disks
do).


Yeah, a lot of people have never really thought about it much.  I've
worked with database servers for years though, where we actually tuned
that type of hardware.

The main difference is that a write doesn't return a block to the
application, so it can return immediately without physically writing to
disk, just putting the blocks in the RAM cache until it has some spare
iops to burn or the cache becomes full.  For reads though, you have to
block until you have physically pulled the block off the disk so that
you have something to return.

RAID controllers do support predictive read-ahead caching, but that only
makes a difference if you have sequential access patterns, e.g.
streaming large files.  Otherwise, it has to block until it pulls data
off the spindle every time because it doesn't know what you'll ask for
next (unless you get very lucky and the data is in cache).

So it is primarily good for streaming large files (e.g. video editing)
or buffering write bursts (e.g. database servers).
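The read/write asymmetry described above can be sketched as a toy write-back
cache model. This is purely illustrative; real RAID controller firmware is
far more involved, and the latency numbers are made up:

```python
class WriteBackCache:
    """Toy model: writes are acknowledged from cache immediately;
    reads must hit the cache or pay the full spindle latency."""

    def __init__(self, disk_latency_ms=10.0, cache_latency_ms=0.1):
        self.cache = {}
        self.disk_latency = disk_latency_ms
        self.cache_latency = cache_latency_ms

    def write(self, block, data):
        # Acknowledge at cache speed; flush to the spindle later,
        # when there are spare iops to burn or the cache fills.
        self.cache[block] = data
        return self.cache_latency

    def read(self, block):
        if block in self.cache:       # recently written, still cached
            return self.cache_latency
        return self.disk_latency      # must block on the physical disk

c = WriteBackCache()
assert c.write(1, b"x") < 1   # write returns at cache speed
assert c.read(99) == 10       # cold read pays full disk latency
assert c.read(1) < 1          # read of a cached block is fast
```

The model shows why cache helps bursty writes but does little for random
reads: a read of an uncached block has nothing to return until the spindle
delivers it.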


 Just how expensive is a high-performance 50GB RAM disk system?


Expect to pay ~$2 per MB on the cheap end of things.  Or for 50GB, about
$100k.  Using cheap machines maxed with RAM and RDMA fabrics or similar
is a cheaper way to do this for roughly equivalent performance, but you
won't be able to do this on Windows -- this is one of those arenas where
Linux excels.
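The arithmetic behind that estimate, using the ~$2/MB figure quoted above:

```python
def ram_disk_cost_usd(capacity_gb, usd_per_mb=2.0):
    """Cost of a RAM disk at a given per-megabyte price."""
    return capacity_gb * 1024 * usd_per_mb

# 50 GB at ~$2/MB comes out to roughly $100k, as stated.
assert ram_disk_cost_usd(50) == 102_400
```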

cheers,

j. andrew rogers



Re: [agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Shane
Peter,
In terms of hardware, as far as I know the biggest PC-style hardware
you can buy supports 32 GB of RAM.  For example, you can buy PCs this
big from people like www.penguincomputing.com
However there isn't a 64 bit version of Windows on the market nor will
there be for some time.  Thus your only option is to run something
like Linux if you want to have all this data being accessed by code
directly in RAM.  Given that your core system is C# this could be a
bit of a problem.  I think this is perhaps the first issue you would
need to sort out before thinking about hardware --- the hardware
exists alright but with Windows you can't make use of it, at least in
terms of having it directly in RAM.
As for more indirect solutions like RAM disks... I think you would
lose at least a factor of ten in speed compared to simply accessing
system memory directly as you would need to go through the file system
and out to an external device with RAM in it pretending to be a disk.
If I remember correctly the fastest disk interfaces you can get for a
PC are in the order of 100 MB per second while system RAM is more like
a few GB per second.  So if you need really high speed then RAM is
perhaps your only option I think.
Shane
Peter Voss wrote:
What are the best options for large amounts of fast memory for Windows-based
systems?
I'm looking for price & performance (access time) for:
1) Cached RAID
2) RAM disks
3) Internal RAM (using 64 bit architecture?)
4) other
Thanks for any info.
Peter


RE: [agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Peter Voss
Microsoft Updates 64-bit Windows XP Preview Editions

http://www.eweek.com/article2/0,1759,1637471,00.asp

Peter



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of
Shane

. However there isn't a 64 bit version of Windows on the market nor will
there be for some time.  Thus your only option is to run something like
Linux if you want to have all this data being accessed by code directly in
RAM





Re: [agi] SOTA 50 GB memory for Windows

2004-08-20 Thread Lukasz Kaiser
Hi.

 Given that your core system is C# this could be a bit of a problem. 

Just to put in my 2c: if you should have the idea to try .NET under Linux,
better first do some tests. In my experience the Linux .NET runtime (mono),
although almost fully compatible, is about 5-10 times slower than on
Windows and does not handle over 1GB of memory even if it is there
(on a 32-bit system at least).

Does anyone know how flash drives perform in such setting, can they
be used as a slower alternative to RAM ?

- lk
