Re: [agi] AI on TV

2002-12-09 Thread maitri



Ben,

I just read the Bio. You gave a lot more play 
to his ideas than the show did. You probably know this, but Starlab has 
folded and I think he was off to the States...

The show seemed to indicate that nothing of note 
ever came out of the project. In fact, it appeared to not generate one new 
network. What they didn't detail was the cause of this. It could 
have been hardware-related, I don't know. They were also having serious 
contract problems with the Russian fellow who built it. He had effectively 
disabled the machine from the US until he got some more money, which eventually 
killed the whole thing. What a waste. Maybe you can buy the machine 
off eBay now. They said it would be auctioned...

They did give a lot of play to his seemingly 
contrarian ideas about the implications of his work. It was a rather 
dismal outlook on society's lack of general acceptance of AI and/or 
enhancement. I hope he was off base in this area, but I wouldn't be 
surprised if a small group of radical anti-AI people emerged with hostile 
intent. Another good reason to not be so visible!!

Kevin

  - Original Message - 
  From: Ben Goertzel
  To: [EMAIL PROTECTED]
  Sent: Monday, December 09, 2002 11:26 AM
  Subject: RE: [agi] AI on TV
  
  
  
There was a show on the tube last night on 
TechTV. It was part of their weekly Secret, Strange and True 
series. They chronicled three guys who are working on creating 
advanced AI.

One guy was from Belgium. My 
apologies to him if he reads this list, but he was a rather quirky and 
stressed character. He had designed a computer that was basically a 
collection of chips. He raised a million and had it built on 
spec. I gather he was expecting something to miraculously emerge from 
this collection, but alas, nothing did. It was really stressful 
watching his stress. He had very high visibility in the country and 
the pressure was immense as he promised a lot. I have real doubts 
about his approach, even though I am a lay-AI person. Also, it's clear 
from watching him that it's sometimes good to have shoestring budgets and low 
visibility. Less stress and more forced creativity in your 
approach...

Kevin: Was the guy from Belgium perhaps Hugo de 
Garis?? [Who is not in Belgium anymore, but who designed a 
radical hardware-based approach to AGI, and who is a bit of a quirky guy?? 
...]

I visited Hugo at Starlab [when it existed] in Brussels in mid-2001.

See my brief bio of Hugo at

http://www.goertzel.org/benzine/deGaris.htm


-- Ben G


RE: [agi] AI on TV

2002-12-09 Thread Gary Miller



On Dec. 9 Kevin said:

"It seems to me that building a strictly "black 
box" AGI that only uses text or graphical input\output can have tremendous 
implications for our society, even without arms and eyes and ears, etc. 
Almost anything can be designed or contemplated within a computer, so the need 
for dealing with analog input seems unnecessary to me. Eventually, these 
will be needed to have a complete, human-like AI. It may even be better 
that these first AGI systems will not have vision and hearing because it will 
make it more palatable and less threatening to the masses."

I agree wholeheartedly. Sony and Honda, as well as several military 
contractors, are spending tens, perhaps hundreds, of millions of dollars 
on R&D robotics programs which incorporate vision, analog control, and 
data acquisition for industry, the military, and yes, even the toy 
companies. 


Once AGIs are ready to fly, they will be able to interface with these 
systems through software APIs (Application Programming Interfaces) and 
will not even care about the low-level programs that enable them to 
move about and visually survey their 
environments.
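Gary's point about the API boundary can be sketched in a few lines of toy code: the "AGI" layer issues high-level calls and reasons over symbolic results, while path planning, motor control, and vision stay hidden below the interface. Every name here (`RobotAPI`, `move_to`, `survey`) is invented for illustration; no vendor's actual SDK is being described.

```python
# Hypothetical sketch: an AGI driving a robot through a high-level API,
# insulated from the low-level motor-control and vision code.

class RobotAPI:
    """Invented interface; stands in for a real robotics SDK."""
    def __init__(self):
        self.position = (0, 0)

    def move_to(self, x, y):
        # Path planning and motor control would live below this call;
        # the AGI layer never sees them.
        self.position = (x, y)
        return True

    def survey(self):
        # Likewise, the vision pipeline is hidden; the caller just gets
        # back symbolic descriptions of what was seen.
        return [{"object": "door", "at": (3, 4)}]

def agi_plan_and_act(robot):
    """Toy 'AGI' layer: reasons only over the API's symbolic outputs."""
    seen = robot.survey()
    target = seen[0]["at"]
    robot.move_to(*target)
    return robot.position

robot = RobotAPI()
print(agi_plan_and_act(robot))  # the AGI reaches the door via API calls alone
```

The point of the sketch is the seam: swapping in a different robot body only changes what sits below `RobotAPI`, not the reasoning layer above it.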

Too often those who seek the spotlight are really sincere, but either 
need recognition for their own self-reassurance or use it as a method 
of attracting potential funding.

There seems to be an unwritten law in the universe which says that all 
major inventions will involve major sacrifice and loss for those who 
dare to tackle what has been deemed impossible by others. From Galileo 
to Edison, to Tesla, to maybe one of us. Before we succeed, if we 
succeed, the universe will exact its toll. For nature will not give up 
her secrets willingly, and intelligence may be her most closely guarded 
secret of all!

Don't forget that genius and madness sometimes walk arm in arm! 


And as the man says, if you weren't crazy when you got in, you probably 
will be before you get out!

  
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of maitri
  Sent: Monday, December 09, 2002 11:08 AM
  To: [EMAIL PROTECTED]
  Subject: [agi] AI on TV
  There was a show on the tube last night on 
  TechTV. It was part of their weekly Secret, Strange and True 
  series. They chronicled three guys who are working on creating advanced 
  AI.
  
  One guy was from Belgium. My apologies to 
  him if he reads this list, but he was a rather quirky and stressed 
  character. He had designed a computer that was basically a collection of 
  chips. He raised a million and had it built on spec. I gather he 
  was expecting something to miraculously emerge from this collection, but alas, 
  nothing did. It was really stressful watching his stress. He had 
  very high visibility in the country and the pressure was immense as he 
  promised a lot. I have real doubts about his approach, even though I am 
  a lay-AI person. Also, it's clear from watching him that it's sometimes 
  good to have shoestring budgets and low visibility. Less stress and more 
  forced creativity in your approach...
  
  The second guy was from either England or the 
  States, not sure. He was working out of his garage with his wife. 
  He was trying to develop robot AI including vision, speech, hearing and 
  movement. He was clearly floundering as he radically redesigned what he 
  was doing probably a dozen times during the one-hour show. I think this 
  experimentation has value. But I really wonder if large scale trial and 
  error will result in AGI. I don't think so. I think trial and 
  error will, of course, be essential during development, but T and E of the 
  entire underlying architecture seems a folly to me. Since the problem is 
  SO immense, I believe one must start with a very sound and detailed game plan 
  that can be tweaked as things move along.
  
  The last guy was Brooks at MIT. They were 
  developing a robot with enhanced vision capabilities. They also failed 
  miserably. I am rather glad that they did. They're funded by the DoD, and 
  are basically trying to build a robotic killing machine. Just what we 
  need.
  
  It seems to me that trying to tackle the vision 
  problem is too big of a place to start. While all this work will have 
  value down the line, is it essential to AGI? It seems to me that 
  building a strictly "black box" AGI that only uses text or graphical 
  input/output can have tremendous implications for our society, even without 
  arms and eyes and ears, etc. Almost anything can be designed or 
  contemplated within a computer, so the need for dealing with analog input 
  seems unnecessary to me. Eventually, these will be needed to have a 
  complete, human-like AI. It may even be better that these first AGI 
  systems will not have vision and hearing because it will make it more 
  palatable and less threatening to the masses.
  
  The show was rather discouraging, especially if 
  one considers that these three folks are leading the way towards 

RE: [agi] AI on TV

2002-12-09 Thread Ben Goertzel




I was at Starlab one week after it folded. Hugo was the only one left 
there -- he was living in an apartment in the building. It was a huge, 
beautiful, ancient building, formerly the Czech Embassy to Brussels. I 
saw the CAM-Brain machine (CBM) there, disabled by Korkin (the maker) 
due to non-payment...

There is a CBM in use at ATR in Japan [where Hugo used to work], but 
it's mostly being used for simple hardware-type experiments, not 
advanced learning... There was one at Lernout & Hauspie, but I don't 
know what became of it when that firm went under...

Hugo 
is currently designing the CBM-2, and I've given him some possibly useful ideas 
in that regard...

I can sympathize somewhat with Korkin: he spent his own $$ on the 
hardware, and then Starlab did not pay him, breaking its contractual 
obligations. He is struggling financially. And Hugo was not at all 
politic or sympathetic in dealing with him, because Hugo is always so 
wrapped up in his own problems. Well, such is human life... I tried 
briefly to help smooth things over w/ Korkin, but Hugo's attitude was 
sufficiently out-there that it was not possible...

-- Ben



Re: [agi] AI on TV

2002-12-09 Thread Shane Legg
maitri wrote:

 
The second guy was from either England or the states, not sure.  He was 
working out of his garage with his wife.  He was trying to develop robot 
AI including vision, speech, hearing and movement.

This one's a bit more difficult, Steve Grand perhaps?

http://www.cyberlife-research.com/people/steve/

Shane

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]


Re: [agi] AI on TV

2002-12-09 Thread Shane Legg
Gary Miller wrote:

On Dec. 9 Kevin said:
 
It seems to me that building a strictly black box AGI that only uses 
text or graphical input/output can have tremendous implications for our 
society, even without arms and eyes and ears, etc.  Almost anything can 
be designed or contemplated within a computer, so the need for dealing 
with analog input seems unnecessary to me.  Eventually, these will be 
needed to have a complete, human-like AI.  It may even be better that 
these first AGI systems will not have vision and hearing because it will 
make it more palatable and less threatening to the masses

My understanding is that this current trend came about as follows:

Classical AI systems were either largely disconnected from the physical
world or lived strictly in artificial micro-worlds.  This led to a
number of problems, including the famous symbol grounding problem, where
the agent's symbols lacked any grounding in an external reality.  As a 
reaction to these problems many decided that AI agents needed to be
more grounded in the physical world -- embodiment, as they call it.

Some now take this to an extreme and think that you should start with
robotic and sensory and control stuff and forget about logic and what
thinking is and all that sort of thing.  This is what you see now in
many areas of AI research, Brooks and the Cog project at MIT being
one such example.

Shane




Re: [agi] AI on TV

2002-12-09 Thread maitri
that's him...






Re: [agi] AI on TV

2002-12-09 Thread maitri
I don't want to underestimate the value of embodiment for an AI system,
especially for the development of consciousness.  But this is just my
opinion...

As far as a very useful AGI is concerned, I don't see the necessity of a
body or sensory inputs beyond textual input.  Almost any form can be
represented as a mathematical model that can easily be input to the
system in that manner.  I'm sure there are others on this list who have
thought a lot more about this than I have..

Kevin




Re: [agi] AI on TV

2002-12-09 Thread Pei Wang
I have a paper
(http://www.cogsci.indiana.edu/farg/peiwang/PUBLICATION/#semantics) on this
topic, which is mostly in agreement with what Kevin said.

For an intelligent system, it is important for its concepts and beliefs to
be grounded on the system's experience, but such experience can be textual.
Of course, sensorimotor experience is richer, but it is not fundamentally
different from textual experience.

Pei
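Pei's claim that experience can be purely textual can be illustrated with a toy sketch: a system whose only "grounding" for a concept is the distributional company that concept keeps across the sentences it has read. This is an editor's illustration of the idea, not NARS code, and the tiny corpus is invented.

```python
# Toy illustration: grounding a symbol in textual experience alone.
# A concept's "meaning" here is just the set of words it has
# co-occurred with across the sentences the system has experienced.

from collections import defaultdict

experience = [
    "a cat is a small furry animal",
    "a dog is a furry animal that barks",
    "a car is a machine with wheels",
]

contexts = defaultdict(set)
for sentence in experience:
    words = sentence.split()
    for w in words:
        contexts[w].update(x for x in words if x != w)

def similarity(a, b):
    """Jaccard overlap of textual contexts: shared experience, no sensors."""
    shared = contexts[a] & contexts[b]
    union = contexts[a] | contexts[b]
    return len(shared) / len(union)

# "cat" ends up closer to "dog" than to "car" on textual evidence alone
print(similarity("cat", "dog") > similarity("cat", "car"))
```

Sensorimotor experience would add richer channels, but structurally the mechanism is the same: concepts defined by the regularities in what the system has encountered.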




Re: [agi] AI on TV

2002-12-09 Thread Alan Grimes
Ben Goertzel wrote: 
 This is not a matter of principle, it's a matter of pragmatics... I 
 think that a perceptual-motor domain in which a variety of cognitively 
 simple patterns are simply expressed, will make world-grounded early 
 language learning much easier...

If anyone has the software for this, please tell me! =)

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/




Re: [agi] AI on TV

2002-12-09 Thread Shane Legg

I think my position is similar to Ben's; it's not really what you
ground things in, but rather that you don't expose your limited
little computer brain to an environment that is too complex --
at least not to start with.  Language, even reasonably simple
context-free languages, could well be too rich for a baby AI.
Trying to process 3D input is far too complex.  Better, then, to
start with something simple like 2D pixel patterns, as Ben suggests.
The A2I2 project by Peter Voss is taking a similar approach.

Once very simple concepts and relations have been formed at this
level then I would expect an AI to be better able to start dealing
with richer things like basic language using what it learned
previously as a starting point.  For example, relating simple
patterns of language that have an immediate and direct relation
to the visual environment to start with and slowly building up
from there.

Shane
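Shane's suggestion -- start a baby AI on simple 2D pixel patterns rather than language or 3D input -- can be made concrete with a toy sketch. The environment and the "concept" learned here are invented for illustration; neither Ben's system nor Peter Voss's A2I2 is being shown.

```python
# Toy sketch of a low-complexity "baby AI" world: simple 2D pixel
# patterns in which an easy regularity (filled vs hollow squares)
# can be noticed before tackling anything as rich as language or 3D.

def filled_square(n):
    return [[1] * n for _ in range(n)]

def hollow_square(n):
    grid = [[0] * n for _ in range(n)]
    for i in range(n):
        grid[0][i] = grid[n - 1][i] = grid[i][0] = grid[i][n - 1] = 1
    return grid

def density(grid):
    """A single crude feature: the fraction of pixels that are on."""
    cells = [c for row in grid for c in row]
    return sum(cells) / len(cells)

# A trivial "concept" the learner could form from this world: filled
# patterns have higher density than hollow ones, at any size.
for n in (4, 6, 8):
    assert density(filled_square(n)) > density(hollow_square(n))
print("concept holds across sizes")
```

The later step Shane describes -- relating simple language to such patterns -- would then attach words like "filled" to regularities the system has already formed, rather than to raw unstructured input.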



Re: [agi] AI on TV

2002-12-09 Thread Pei Wang
On this issue, we can distinguish 4 approaches:

(1) let symbols get their meaning through interpretation (provided in
another language) --- this is the approach used in traditional symbolic AI.

(2) let symbols get their meaning by grounding on textual experience ---
this is what I and Kevin suggested.

(3) let symbols get their meaning by grounding on simplified perceptual
experience  --- this is what Ben and Shane suggested.

(4) let symbols get their meaning by grounding on human-level perceptual
experience --- this is what Brooks (the robotics researcher at MIT) and
Harnad (who raised the symbol grounding issue in the first place)
proposed.

My opinion is: in principle, approach (1) doesn't work well for AI,
while the last 3 approaches are in the same category.  Of course, the richer
the experience is, the more capable the system will be.  However, to
actually develop an AGI theory/system, I'd rather start with (2), and leave
(3) for the next step, and (4) for the future.   Therefore, though I
basically agree with what Ben and Shane said, I won't do that in NARS very
soon.

Pei
