Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread Mike Tintner
Jim,

The importance of the point here is NOT primarily about AGI systems having to 
make this distinction. Yes, a real AGI robot will probably have to make this 
distinction as an infant does - but in practical terms, that's an awfully long 
way off.

The importance is this: real AGI is about dealing with a world of living 
creatures in a myriad of ways - and those living creatures are all fundamentally 
unpredictable. Ergo most AGI activities and problems involve dealing with a 
fundamentally unpredictable world.

Narrow AI - and all rational technology - and all attempts at AGI to date are 
predicated on dealing with a predictable world. (All the additions of 
probabilities and uncertainties to date do not change this basic assumption.) 
All your personal logical and mathematical exercises are based on a predictable 
world. An AGI equivalent of the TSP (travelling salesman problem) for you would 
be what I already said - how would you decide a travel route to a set of 
*mobile*, *unpredictable* destinations?
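For concreteness, here is a toy sketch of that "mobile destinations" problem 
(nothing here is from the thread - the drift model, the greedy planner and all 
the numbers are assumptions). It just contrasts a route frozen at the start 
with one that re-picks the nearest remaining target after every leg, while the 
targets keep wandering:

import math
import random

random.seed(0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def drift(p, step=0.8):
    # each unvisited destination wanders unpredictably between legs
    return (p[0] + random.uniform(-step, step), p[1] + random.uniform(-step, step))

def greedy_order(points):
    # nearest-neighbour visiting order computed once, on the initial positions
    pos, remaining, order = (0.0, 0.0), dict(points), []
    while remaining:
        i = min(remaining, key=lambda j: dist(pos, remaining[j]))
        order.append(i)
        pos = remaining.pop(i)
    return order

def tour_cost(targets, adaptive):
    pts = dict(enumerate(targets))     # index -> current position
    frozen = greedy_order(pts)         # a plan fixed at time zero
    pos, cost = (0.0, 0.0), 0.0
    while pts:
        i = (min(pts, key=lambda j: dist(pos, pts[j])) if adaptive
             else next(k for k in frozen if k in pts))
        cost += dist(pos, pts[i])
        pos = pts.pop(i)
        pts = {j: drift(p) for j, p in pts.items()}    # the world moves on
    return cost

targets = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(6)]
print("plan frozen at start:", round(tour_cost(targets, adaptive=False), 2))
print("re-plan every leg:   ", round(tour_cost(targets, adaptive=True), 2))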

This recognition of fundamental unpredictability totally transforms the way you 
look at the world - and the kind of problems you have to deal with - and makes 
you aware of the very different, non-rational problems that real humans do deal 
with.

And BTW it doesn't really matter if you are a determinist - the plain reality 
of life is that the only evidence we have is of living creatures and humans 
behaving unpredictably. There might, for argument's sake, be some divine 
determinist plan revealing the underlying laws of living behaviour - but it 
sure as heck ain't available to anyone (not to mention that it doesn't exist), 
and we have to proceed accordingly.




From: Jim Bromer 
Sent: Monday, June 28, 2010 5:20 PM
To: agi 
Subject: Re: [agi] A Primary Distinction for an AGI


On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.uk wrote:


  Inanimate objects normally move *regularly*, in *patterned*/*pattern* ways, 
and *predictably*.

  Animate objects normally move *irregularly*, in *patchy*/*patchwork* ways, 
and *un-bleeding-predictably*.


This presumption looks similar (in some profound way) to many of the 
presumptions that were tried in the early days of AI, partly because computers 
lacked memory and they were very slow.  It's unreliable just because we need 
the AGI program to be able to consider situations when, for example, inanimate 
objects move in patchy patchwork ways or in unpredictable patterns.

Jim Bromer


Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread Mike Tintner
Just off the cuff here - isn't the same true for vision? You can't learn vision 
from vision, just as all NLP has no connection with the real world and relies 
totally on the human programmer's knowledge of that world.

Your visual program actually relies totally on your visual vocabulary - not 
its own. That is the inevitable penalty of processing unreal signals on a 
computer screen, which are in fact no more connected to the real world than 
the verbal/letter signals involved in NLP are.

What you need to do - what anyone in your situation with anything like your 
aspirations needs to do - is to hook up with a roboticist. Everyone here should 
be doing that.



From: David Jones 
Sent: Tuesday, June 29, 2010 5:27 PM
To: agi 
Subject: Re: [agi] A Primary Distinction for an AGI


You can't learn language from language without embedding way more knowledge 
than is reasonable. Language does not contain the information required for its 
interpretation. There is no *reason* to interpret the language into any of the 
infinite possible interpretations. There is nothing to explain it with, yet it 
requires explanatory reasoning to determine the correct real-world 
interpretation.


  On Jun 29, 2010 10:58 AM, Matt Mahoney matmaho...@yahoo.com wrote:


  David Jones wrote:
   Natural language requires more than the words on the page in the real 
world. Of...

  Any knowledge that can be demonstrated over a text-only channel (as in the 
Turing test) can also be learned over a text-only channel.


   Cyc also is trying to store knowledge about a super complicated world in 
simplistic forms and al...

  Cyc failed because it lacks natural language. The vast knowledge store of the 
internet is unintelligible to Cyc. The average person can't use it because they 
don't speak Cycl and because they have neither the ability nor the patience to 
translate their implicit thoughts into augmented first order logic. Cyc's 
approach was understandable when they started in 1984 when they had neither the 
internet nor the vast computing power that is required to learn natural 
language from unlabeled examples like children do.


   Vision and other sensory interpretation, on the other hand, do not require 
more info because that...

  Without natural language, your system will fail too. You don't have enough 
computing power to learn language, much less the million times more computing 
power you need to learn to see.


   
  -- Matt Mahoney, matmaho...@yahoo.com




  
  From: David Jones davidher...@gmail.com
  To: agi a...@v2.listbox.c...

  Sent: Mon, June 28, 2010 9:28:57 PM 

  Subject: Re: [agi] A Primary Distinction for an AGI


  Natural language requires more than the words on the page in the real world. 
Of course that didn't ...



Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread David Jones
Mike,

THIS is the flawed reasoning that causes people to ignore vision as the
right way to create AGI. And I've finally come up with a great way to show
you how wrong this reasoning is.

I'll give you an extremely obvious argument that proves that vision requires
much less knowledge to interpret than language does. Let's say that you have
never been to Egypt and have never seen some particular movie before. But
if you see the movie, an alien landscape, an alien world, a new place or any
such new visual experience, you can immediately interpret it in terms of
spatial, temporal, compositional and other relationships.

Now, go to Egypt and listen to the people speak. Can you interpret it? Nope. Why?!
Because you don't have enough information. The language itself does not
contain any information to help you interpret it. We do not learn language
simply by listening. We learn based on evidence from how the language is
used and how it occurs in our daily lives. Without that experience, you
cannot interpret it.

But with vision, you do not need extra knowledge to interpret a new
situation. You can recognize completely new objects without any training
except for simply observing them in their natural state.

I wish people understood this better.

Dave


Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread Matt Mahoney
David Jones wrote:
 I wish people understood this better.

For example, animals can be intelligent even though they lack language because 
they can see. True, but an AGI with language skills is more useful than one 
without.

And yes, I realize that language, vision, motor skills, hearing, and all the 
other senses and outputs are tied together. Skills in any area make learning 
the others easier.

 -- Matt Mahoney, matmaho...@yahoo.com






Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread David Jones
The point I was trying to make is that an approach that tries to interpret
language using language alone, without sufficient information or the means to
realistically acquire that information, *should* fail.

On the other hand, an approach that tries to interpret vision with minimal
upfront knowledge requirements *should* succeed, because the knowledge required
to automatically learn to interpret images is amenable to pre-programming. In
addition, such knowledge must be pre-programmed. The knowledge for
interpreting language, though, should not be.

Dave


Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread Matt Mahoney
Experiments in text compression show that text alone is sufficient for learning 
to predict text.
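
For concreteness, a toy sketch of that claim (the corpus, model order and 
smoothing constant are illustrative assumptions, not the actual compression 
experiments referred to): a character trigram model trained on a string and 
scored by average bits per character - lower bits per character means better 
prediction, and better prediction means better compression.

import math
from collections import defaultdict

def train(text, order=3):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(text) - order):
        context, nxt = text[i:i + order], text[i + order]
        counts[context][nxt] += 1
    return counts

def bits_per_char(model, text, order=3, alphabet=256, alpha=0.5):
    # cross-entropy of the model on the text, with add-alpha smoothing
    total, n = 0.0, 0
    for i in range(len(text) - order):
        context, nxt = text[i:i + order], text[i + order]
        seen = model[context]
        p = (seen[nxt] + alpha) / (sum(seen.values()) + alpha * alphabet)
        total += -math.log2(p)
        n += 1
    return total / max(n, 1)

corpus = "the cat sat on the mat. the dog sat on the log. " * 50
model = train(corpus)
print(round(bits_per_char(model, corpus), 2), "bits/char on the training text")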

I realize that for a machine to pass the Turing test, it needs a visual model 
of the world. Otherwise it would have a hard time with questions like "what 
word in this ernai1 did I spell wrong?" Obviously the easiest way to build a 
visual model is with vision, but it is not the only way.

 -- Matt Mahoney, matmaho...@yahoo.com





From: David Jones davidher...@gmail.com
To: agi agi@v2.listbox.com
Sent: Tue, June 29, 2010 3:22:33 PM
Subject: Re: [agi] A Primary Distinction for an AGI

I certainly agree that the techniques and explanation generating algorithms for 
learning language are hard coded into our brain. But, those techniques alone 
are not sufficient to learn language in the absence of sensory perception or 
some other way of getting the data required.

Dave


On Tue, Jun 29, 2010 at 3:19 PM, Matt Mahoney matmaho...@yahoo.com wrote:

David Jones wrote:
  The knowledge for interpreting language though should not be 
 pre-programmed. 


I think that human brains are wired differently from other animals' to make 
language learning easier. We have not been successful in training other 
primates to speak, even though they have all the right anatomy, such as vocal 
cords, tongue, lips, etc. When primates have been taught sign language, they 
have not successfully mastered forming sentences.

 -- Matt Mahoney, matmaho...@yahoo.com







Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread David Jones
The purpose of text is to convey something. It has to be interpreted. Who
cares about predicting the next word if you can't interpret a single bit of
it?

On Tue, Jun 29, 2010 at 3:43 PM, David Jones davidher...@gmail.com wrote:

 People do not predict the next words of text. We anticipate them, but when
 something different shows up, we accept it if it is *explanatory*.
 Compression-like algorithms, though, will never be able to do this type of
 explanatory reasoning, which is required to disambiguate text. They are
 certainly not sufficient for learning language, which is not at all about
 predicting text.


Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread Mike Tintner
You're not getting where I'm coming from at all. I totally agree vision is far 
prior to language. (We've covered your points many times.) That's not the 
point - which is that vision is nevertheless still vastly more complex than you 
have any idea.

For one thing, vision depends on perceptualising/conceptualising the world - a 
schematic, image-schematic ontology of the world. It almost certainly has to 
be built up gradually, in a certain order.

No one in our culture has much idea of either what that ontology - a visual 
ontology - consists of, or how it's built up.

And for the most basic thing, you still haven't registered that your computer 
program has ZERO VISION. It's not actually looking at the world at all. It's 
BLIND - if you take the time to analyse it. A pretty fundamental error/ 
misconception.

Consequently, it also lacks a fundamental dimension of vision, which is 
POINT-OF-VIEW - the distance of the visual medium (e.g. the retina) and the 
viewing subject from the visual object.

Get thee to a roboticist, and make contact with the real world.



Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread David Jones
On Tue, Jun 29, 2010 at 3:33 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  You're not getting where I'm coming from at all. I totally agree vision
 is far prior to language. (We and I've covered your points many times).
 That's not the point - wh. is that vision is nevertheless still vastly more
 complex, than you have any idea.


Whatever you say. That has nothing to do with whether it should be pursued
this way or not.



 For one thing, vision depends on perceptualising/ conceptualising the world
 - a schematic ontology of the world - image-schematic. It almost certainly
 has to be done in a certain order, gradually built up.


How is that, even remotely, a reason to change the way I do my research? It
doesn't even logically follow...



 No one in our culture has much idea of either what that ontology - a visual
 ontology - consists of, or how it's built up.


Again, how is that an argument for changing my research? It's not. It does
not follow again.



 And for the most basic thing, you still haven't registered that your
 computer program has ZERO VISION. It's not actually looking at the world at
 all. It's BLIND - if you take the time to analyse it. A pretty fundamental
 error/ misconception.


Not an argument again. It has nothing to do with whether my approach will or
will not provide the valuable knowledge and foundation required to solve the
fundamental problems of general vision.



 Consequently, it also lacks a fundamental dimension of vision, wh. is
 POINT-OF-VIEW - distance of the visual medium (eg the retina) and viewing
 subject from the visual object.



AGAIN. Not an argument against my approach. It simply doesn't logically
follow from anything. How does having a point of view in the example problems
prove that anything learned or developed isn't applicable to general vision?


 Get thee to a roboticist,  make contact with the real world.


Get yourself to a psychologist so that they can show you how flawed your
reasoning is. Fallacy upon fallacy. You are not in touch with reality.




Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread Matt Mahoney
Answering questions is the same problem as predicting the answers. If you can 
compute p(A|Q) where Q is the question (and previous context of the 
conversation) and A is the answer, then you can also choose an answer A from 
the same distribution. If p() correctly models human communication, then the 
response would be indistinguishable from a human in a Turing test.
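
As a toy sketch of that equivalence (the Q/A log below is made up purely for 
illustration, and nothing here is meant as the actual modelling approach): 
estimate p(A|Q) by counting answers per question, then sample a reply from 
that same conditional distribution.

import random
from collections import Counter, defaultdict

log = [
    ("how are you", "fine, thanks"),
    ("how are you", "not bad"),
    ("how are you", "fine, thanks"),
    ("what time is it", "no idea"),
]

# p(A|Q) as normalised counts of the answers seen for each question
cond = defaultdict(Counter)
for q, a in log:
    cond[q][a] += 1

def answer(question):
    dist = cond.get(question)
    if not dist:
        return "..."    # unseen question: the toy model has nothing to say
    answers, weights = zip(*dist.items())
    return random.choices(answers, weights=weights)[0]   # sample from p(A|Q)

print(answer("how are you"))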

 -- Matt Mahoney, matmaho...@yahoo.com






Re: [agi] A Primary Distinction for an AGI

2010-06-29 Thread David Jones
Scratch my statement about it being useless :) It's useful, but nowhere
near sufficient for AGI-like understanding.

On Tue, Jun 29, 2010 at 4:58 PM, David Jones davidher...@gmail.com wrote:

 Notice how you said the *context* of the conversation. The context is the real
 world, and it is completely missing. You cannot model human communication
 using text alone. The responses you would get back would be exactly like
 ELIZA's. Sure, it might be pleasing to someone who has never seen AI before,
 but it's certainly not answering any questions.

 This reminds me of the Bing search engine commercials where people ask a
 question and get responses that include the words they asked about, but in a
 completely wrong context.

 Predicting the next word and understanding the question are completely
 different and cannot be solved the same way. In fact, predicting the next
 word is altogether useless (at least by itself) in my opinion.

 Dave



Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
Mike,

Alive vs. dead? As I've said before, there is no actual difference. It is
not a qualitative difference that makes something alive or dead. It is a
quantitative difference. They are both controlled by physics. I don't mean
the nice, clean physics rules that we approximate things with; I mean the
real dynamics of matter. Neither moves any more regularly or irregularly
than the other. It is harder to define why something alive moves because
the mechanism is normally too complex. If you didn't realize, there are life
forms that don't really move, such as viruses. Viruses are controlled by the
liquid that contains them. Yet, viruses are arguably alive. Some plants or
algae don't really move either. They may just grow in some direction, which
is not quite the same as movement.

Likewise, your analogy of this to AGI fails. You think there is a
difference, but there is none. You may think a fractal is more AGI than a
simple, low-noise black square, but that is not the case. It is completely
beside the point. I can easily add noise to my experiments. I can simulate
the noise of light, camera lenses, blurring, etc. But why should I when,
even without noise, there is a clear unsolved AGI challenge? The explanatory
reasoning required to solve even zero-noise problems is still required for
full-complexity problems. If you can't solve it for 2 squares on a screen,
what makes you think you can solve it for real images? Your grasp of reality
regarding AGI is quite poor, in my opinion.
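
For illustration only (this is not the actual experiment code - the frame
size, noise level and blur kernel are assumptions), a minimal sketch of adding
that kind of noise to a synthetic frame: a white square on black, plus
Gaussian sensor noise and a crude box blur standing in for lens blur.

import numpy as np

rng = np.random.default_rng(0)

frame = np.zeros((64, 64))
frame[20:36, 24:40] = 1.0    # the clean "square on a screen"

def add_sensor_noise(img, sigma=0.05):
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def box_blur(img, k=3):
    # naive k x k box blur; edges are padded by repeating border pixels
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

noisy = add_sensor_noise(box_blur(frame))
print("clean range:", frame.min(), frame.max(),
      "noisy range:", round(noisy.min(), 3), round(noisy.max(), 3))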

Your main claim is that the problems I am working on are not representative
or applicable to AGI. But, you fail to see that they really are. The
abductive reasoning required to solve these extremely simplified problems is
required for every other AGI problem as well. These problems might be
solvable using methods that don't apply to AGI. But, that's why it is
important to force oneself to solve them in such a way that it IS applicable
to AGI. It doesn't mean that you have to choose a problem that is so hard
you can't cheat. It's unnecessary to do that unless you can't control your
desire to cheat. I can. Developing in this way, such as an implementation of
explanatory reasoning, is very much applicable to AGI.

Dave

On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

 The recent "Core of AGI" exchange has led me IMO to a beautiful conclusion -
 to one of the most basic distinctions a real AGI system must make, and also
 a simple way of distinguishing between narrow AI and real AGI projects of
 any kind.

 Consider - you have

 a) Dave's square moving across a screen

 b) my square moving across a screen

 (it was a sort-of-Pong-player line, but let's make it a square box).

 How do you distinguish which is animate or inanimate, alive or dead? A
 very early distinction an infant must make.

 Remember inanimate objects move (or are moved) too, and in this case you
 can only see them in motion - so the self-starting distinction is out.

 Well, obviously, if Dave's moves *regularly* (like a train or a falling
 stone), it's probably inanimate. If mine moves *irregularly* - if it stops
 and starts, or slows and accelerates in an irregular, even if only subtly
 jerky, fashion (like one operated by a human Pong player) - it's probably
 animate. That's what distinguishes the movement of life.

 Inanimate objects normally move *regularly*, in *patterned*/*pattern* ways,
 and *predictably*.

 Animate objects normally move *irregularly*, in *patchy*/*patchwork* ways,
 and *un-bleeding-predictably*.

 (IOW Newton is wrong - the laws of physics do not apply to living objects
 as whole objects - that's the fundamental way we know they are living,
 because they visibly don't obey those laws - they don't normally move
 regularly like a stone falling to earth, or thrown through the sky. And
 we're very impressed when humans like dancers or soldiers do manage, by
 dint of great effort and practice, to move with a high though not perfect
 degree of regularity and smoothness.)
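
 A toy sketch of that cue, added for concreteness - the two example tracks
 and the 0.5 threshold are assumptions, not anything proposed in the post:
 score a 2-D trajectory by how variable its frame-to-frame speed is, and
 call high-variability motion "animate-looking".

import math
import random

random.seed(1)

def irregularity(track):
    # coefficient of variation of the speeds between successive positions
    speeds = [math.dist(a, b) for a, b in zip(track, track[1:])]
    mean = sum(speeds) / len(speeds)
    var = sum((s - mean) ** 2 for s in speeds) / len(speeds)
    return math.sqrt(var) / (mean + 1e-9)

# "Dave's square": constant-velocity drift across the screen
regular = [(t * 1.0, 0.0) for t in range(30)]

# "Mike's square": stops, starts, speeds up and slows down unpredictably
jerky, x = [], 0.0
for _ in range(30):
    x += random.choice([0.0, 0.2, 0.5, 2.0])
    jerky.append((x, 0.0))

for name, track in [("regular", regular), ("jerky", jerky)]:
    score = irregularity(track)
    label = "animate-looking" if score > 0.5 else "inanimate-looking"
    print(name, round(score, 2), label)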

 And now we have such a simple way of distinguishing between narrow AI and
 real AGI projects. Look at their objects. The really narrow AI-er will
 always do what Dave did - pick objects that are shaped regularly, move and
 behave regularly, are patterned, and predictable. Even at as simple a level
 as plain old squares.

 And he'll pick closed, definable sets of objects.

 He'll do this instinctively, because he doesn't know any different - that's
 his intellectual, logico-mathematical world - one of objects that, no matter
 how complex (like fractals), are always regular in shape and movement,
 patterned, come in definable sets and are predictable.

 That's why Ben wants to see the world only as structured and patterned even
 though there's so much obvious mess and craziness everywhere - he's never
 known any different intellectually.

 That's why Michael can't bear to even contemplate a world in which things
 and people behave unpredictably. (And 

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Jim Bromer
On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.uk wrote:


 Inanimate objects normally move *regularly*, in *patterned*/*pattern* ways,
 and *predictably*.

 Animate objects normally move *irregularly*, in *patchy*/*patchwork* ways,
 and *un-bleeding-predictably*.


This presumption looks similar (in some profound way) to many of the
presumptions that were tried in the early days of AI, partly because
computers lacked memory and they were very slow.  It's unreliable just
because we need the AGI program to be able to consider situations when, for
example, inanimate objects move in patchy patchwork ways or in unpredictable
patterns.

Jim Bromer





Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Jim Bromer
Well, I see that Mike did say normally move... so yes, that type of
principle could be used in a more flexible AGI program (although there is
still a question about using any presumptions that go into this level
of detail about their reference subjects.  I would not use a primary
reference like Mike's in my AGI program just because it is so presumptuous
about animate and inanimate objects).  But anyway, my criticism then is that
the presumption is not really superior - in any way - to the run-of-the-mill
presumptions that you often hear considered in discussions about AGI
programs.  For example, David never talked about distinguishing between
animate and inanimate objects (in the sense in which Mike is using the term
'animate'), and his reference was only to a graphics example used to present
the idea that he was talking about.
Jim Bromer

On Mon, Jun 28, 2010 at 12:20 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner 
 tint...@blueyonder.co.uk wrote:


 Inanimate objects normally move  *regularly,* in *patterned*/*pattern*
 ways, and *predictably.*

 Animate objects normally move *irregularly*, * in *patchy*/*patchwork*
 ways, and *unbleedingpredictably* .


 This presumption looks similar (in some profound way) to many of the
 presumptions that were tried in the early days of AI, partly because
 computers lacked memory and they were very slow.  It's unreliable just
 because we need the AGI program to be able to consider situations when, for
 example, inanimate objects move in patchy patchwork ways or in unpredictable
 patterns.

 Jim Bromer






Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
Yeah. I forgot to mention that robots are not alive yet could act
indistinguishably from what is alive. The concept of alive is likely
something that requires inductive-type reasoning and generalization to
learn. Categorization, similarity analysis, etc. could assist in making such
distinctions as well.

The point is that AGI is not defined by any particular problem. It is
defined by how you solve problems, even simple ones, which is why your claim
that my problems are not AGI is simply wrong.

On Jun 28, 2010 12:22 PM, Jim Bromer jimbro...@gmail.com wrote:

On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.uk
wrote:



 Inanimate objects normally move  *regularly,* in *patterned*/*pattern*
ways, and *predictably.*
This presumption looks similar (in some profound way) to many of the
presumptions that were tried in the early days of AI, partly because
computers lacked memory and they were very slow.  It's unreliable just
because we need the AGI program to be able to consider situations when, for
example, inanimate objects move in patchy patchwork ways or in unpredictable
patterns.

Jim Bromer





Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Jim Bromer

  On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner 
 tint...@blueyonder.co.uk wrote:


 Inanimate objects normally move  *regularly,* in *patterned*/*pattern*
 ways, and *predictably.*

 Animate objects normally move *irregularly*, * in *patchy*/*patchwork*
 ways, and *unbleedingpredictably* .




I think you made a major tactical error and just got caught acting the way
you are constantly criticizing everyone else for acting.  --(Busted)--

You might say my interest is: how do we get a contemporary computer program
to deal with situations in which a prevailing (or presumptuous) point of
view should be reconsidered from different points of view, when the range of
reasonable ways to look at a problem is not clear and the possibilities are
too numerous for a contemporary computer to examine carefully in a
reasonable amount of time.

For example, we might try opposites, and in this case I wondered about the
case where we might want to consider a 'supposedly inanimate object' that
moves in an irregular and unpredictable way.  Another example: Can
'unpredictable' itself be considered predictable?  To some extent the answer
is, of course it can.  The problem with using opposites is that they are an
idealization of the real-world situations where using alternative ways of
looking at a problem may be useful.  Can an object be both inanimate and
animate (in the sense Mike used the term)?  Could there be another class of
things that is neither animate nor inanimate?  Is animate versus inanimate
really the best way to describe living versus non-living?  No?

Given that the possibilities could quickly add up and given that they are
not clearly defined, it presents a major problem of complexity to the
would-be designer of a true AGI program.  The problem is that it is just not
feasible to evaluate millions of variations of possibilities and then find
the best candidates within a reasonable amount of time. And this problem
does not just concern the problem of novel situations but those specific
situations that are familiar but where there are quite a few details that
are not initially understood.  While this is -clearly- a human problem, it
is a much more severe problem for contemporary AGI.

Jim Bromer





Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Russell Wallace
On Mon, Jun 28, 2010 at 4:54 PM, David Jones davidher...@gmail.com wrote:
 But, that's why it is important to force oneself to solve them in such a way 
 that it IS applicable to AGI. It doesn't mean that you have to choose a 
 problem that is so hard you can't cheat. It's unnecessary to do that unless 
 you can't control your desire to cheat. I can.

That would be relevant if it was entirely a problem of willpower and
self-discipline, but it isn't. It's also a problem of guidance. A real
problem gives you feedback at every step of the way, it keeps blowing
your ideas out of the water until you come up with one that will
actually work, that you would never have thought of in a vacuum. A toy
problem leaves you guessing, and most of your guesses will be wrong in
ways you won't know about until you come to try a real problem and
realize you have to throw all your work away.

Conversely, a toy problem doesn't make your initial job that much
easier. It means you have to write less code, sure, but what of it?
That was only ever the lesser difficulty. The main reason toy problems
are easier is that you can use lower grade methods that could never
scale up to real problems -- in other words, precisely that you can
'cheat'. But if you aren't going to cheat, you're sacrificing most of
the ease of a toy problem, while also sacrificing the priceless
feedback from a real problem -- the worst of both worlds.




Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
I also want to mention that I develop solutions to the toy problems with the
real problems in mind. I also fully intend to work my way up to the real
thing by incrementally adding complexity and exploring the problem well at
each level of complexity. As you do this, the flaws in the design will be
clear and I can retrace my steps to create a different solution. The benefit
of this strategy is that we fully understand the problems at each level of
complexity. When you run into something that is not accounted for, you are much
more likely to know how to solve it. Despite its difficulties, I prefer my
strategy to the alternatives.

Dave

On Mon, Jun 28, 2010 at 3:56 PM, David Jones davidher...@gmail.com wrote:

 That does not have to be the case. Yes, you need to know what problems you
 might have in more complicated domains to avoid developing completely
 useless theories on toy problems. But, as you develop for full complexity
 problems, you are confronted with several sub problems. Because you have no
 previous experience, what tends to happen is you hack together a solution
 that barely works and simply isn't right or scalable because we don't have a
 full understanding of the individual sub problems. Having experience with
 the full problem is important, but forcing yourself to solve every sub
 problem at once is not a better strategy at all. You may think my strategy
 has flaws, but I know that and still chose it because the alternative
 strategies are worse.

 Dave


 On Mon, Jun 28, 2010 at 3:41 PM, Russell Wallace 
 russell.wall...@gmail.com wrote:

 On Mon, Jun 28, 2010 at 4:54 PM, David Jones davidher...@gmail.com
 wrote:
  But, that's why it is important to force oneself to solve them in such a
 way that it IS applicable to AGI. It doesn't mean that you have to choose a
 problem that is so hard you can't cheat. It's unnecessary to do that unless
 you can't control your desire to cheat. I can.

 That would be relevant if it was entirely a problem of willpower and
 self-discipline, but it isn't. It's also a problem of guidance. A real
 problem gives you feedback at every step of the way, it keeps blowing
 your ideas out of the water until you come up with one that will
 actually work, that you would never have thought of in a vacuum. A toy
 problem leaves you guessing, and most of your guesses will be wrong in
 ways you won't know about until you come to try a real problem and
 realize you have to throw all your work away.

 Conversely, a toy problem doesn't make your initial job that much
 easier. It means you have to write less code, sure, but what of it?
 That was only ever the lesser difficulty. The main reason toy problems
 are easier is that you can use lower grade methods that could never
 scale up to real problems -- in other words, precisely that you can
 'cheat'. But if you aren't going to cheat, you're sacrificing most of
 the ease of a toy problem, while also sacrificing the priceless
 feedback from a real problem -- the worst of both worlds.










Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
Yes I have. But what I found is that real vision is so complex, involving so
many problems that must be solved and studied, that any attempt at general
vision is beyond my current abilities. It would be like expecting a single
person, such as myself, to figure out how to build the h-bomb all by
themselves back before it had ever been done. It is the same scenario
because it involves many engineering and scientific problems that must all
be solved and studied.

You see in real vision you have a 3D world, camera optics, lighting issues,
noise, blurring, rotation, distance, projection, reflection, shadows,
occlusion, etc, etc, etc.

It is many orders of magnitude more difficult than the problems I'm studying.
Yet really consider the two black squares problem. It's hard! It's so simple,
yet so hard. I still haven't fully defined how to do it algorithmically... I
will get to that in the coming weeks.

So, to work on the full problem is practically impossible for me. Seeing as
though there isn't a lot of support for AGI research such as this, I am much
better served by proving the principle rather than implementing the full
solution to the real problem. If I can even prove how vision works on simple
black squares, I might be able to get help in my research... without a proof
of concept, no one will help. If I can prove it on screenshots, even better.
It would be a very significant achievement, if done in a truly general
fashion (keeping in mind that truly general is not really possible).

A great example of what happens when you work with real images is this...
Look at the current solutions. They use features, such as SIFT. Using SIFT
features, you might be able to say that an object exists with 70% certainty,
or something like that. But it won't be able to tell you what the object
looks like, what's behind it, what it is occluding, what's next to it, what
color it is, what pixels in the image belong to it, how those parts are
attached, etc. etc. etc. Now do you see why it makes little sense to tackle
the full problem? Even the state of the art in computer vision sucks. It is
great at certain narrow applications, but nowhere near where it needs to be
for AGI.
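
For concreteness, here is a rough sketch (OpenCV, Python) of that style of
SIFT-based detection - purely illustrative, with placeholder file names and a
made-up confidence statistic. It can say a template object is probably
present in a scene, but it says nothing about occlusion, segmentation,
attachment or what is behind what:

# Rough sketch of feature-based object detection with SIFT (OpenCV).
# It answers "is this object probably present?" but not which pixels
# belong to it, what it occludes, or how its parts are attached.
import cv2

template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)               # placeholder

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# Match descriptors; keep matches passing Lowe's ratio test.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

# Crude "confidence": fraction of template features that found a match.
confidence = len(good) / max(len(kp1), 1)
print("object present with ~{:.0%} of template features matched".format(confidence))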

Dave

On Mon, Jun 28, 2010 at 4:00 PM, Russell Wallace
russell.wall...@gmail.com wrote:

 On Mon, Jun 28, 2010 at 8:56 PM, David Jones davidher...@gmail.com
 wrote:
  Having experience with the full problem is important, but forcing
 yourself to solve every sub problem at once is not a better strategy at all.

 Certainly going back to a toy problem _after_ gaining some experience
 with the full problem would have a much better chance of being a
 viable strategy. Have you tried that with what you're doing, i.e.
 having a go at writing a program to understand real video before going
 back to black squares and screen shots to improve the fundamentals?








Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Russell Wallace
*nods* So you have tried the full problem, and caught up with the current
state-of-the-art in techniques for it? In that case...

... well, honestly, I still don't think your approach with black squares and
screenshots is going to produce any useful results. But given the above, I
no longer think you are being irrational in pursuing it. I think, as you
said, you have looked at the alternatives, all of which are very tough, and
your judgment disagrees with mine about which is the least bad.

On Mon, Jun 28, 2010 at 9:15 PM, David Jones davidher...@gmail.com wrote:

 Yes I have. But what I found is that real vision is so complex, involving
 so many problems that must be solved and studied, that any attempt at
 general vision is beyond my current abilities. It would be like expecting a
 single person, such as myself, to figure out how to build the h-bomb all by
 themselves back before it had ever been done. It is the same scenario
 because it involves many engineering and scientific problems that must all
 be solved and studied.

 You see in real vision you have a 3D world, camera optics, lighting issues,
 noise, blurring, rotation, distance, projection, reflection, shadows,
 occlusion, etc, etc, etc.

 It is many magnitudes more difficult than the problems I'm studying. Yet,
 really consider the two black squares problem. Its hard! It's so simple, yet
 so hard. I still haven't fully defined how to do it algorithmically... I
 will get to that in the coming weeks.

 So, to work on the full problem is practically impossible for me. Seeing as
 though there isn't a lot of support for AGI research such as this, I am much
 better served by proving the principle rather than implementing the full
 solution to the real problem. If I can even prove how vision works on simple
 black squares, I might be able to get help in my research... without a proof
 of concept, no one will help. If I can prove it on screenshots, even better.
 It would be a very significant achievement, if done in a truly general
 fashion (keeping in mind that truly general is not really possible).

 A great example of what happens when you work with real images is this...
 Look at the current solutions. They use features, such as sift. Using sift
 features, you might be able to say that an object exists with 70% certainty,
 or something like that. But, it won't be able to tell you what the object
 looks like, whats behind it. What is it occluding. What's next to it. What
 color is it. What pixels in the image belong to it. How are those parts
 attached. Etc. etc. etc. Now do you see why it makes little sense to tackle
 the full problem? Even the state of the art in computer vision sucks. It is
 great at certain narrow applications, but no where near where it needs to be
 for AGI.

 Dave

 On Mon, Jun 28, 2010 at 4:00 PM, Russell Wallace 
 russell.wall...@gmail.com wrote:

 On Mon, Jun 28, 2010 at 8:56 PM, David Jones davidher...@gmail.com
 wrote:
  Having experience with the full problem is important, but forcing
 yourself to solve every sub problem at once is not a better strategy at all.

 Certainly going back to a toy problem _after_ gaining some experience
 with the full problem would have a much better chance of being a
 viable strategy. Have you tried that with what you're doing, i.e.
 having a go at writing a program to understand real video before going
 back to black squares and screen shots to improve the fundamentals?










Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Mike Tintner
There would be an  insidious problem with programming computers to play poker  
that in Sid's opinion  would raise the Turing test to a higher level.

  The problem would not be whether people could figure out if they were up 
against a computer. It would be whether the computer could figure out people, 
particularly the ever-changing social dynamics in a randomly selected group of 
people. Nobody at a poker table would care whether or not the computer would 
play poker like a person.

  In fact, people would welcome a computer, since computers would tend to play 
predictably. Computers would be, by definition, predictable, which would be the 
meaning of the word 'programmed'.

   If you would play a computer simulation for a short amount of time, you 
would learn the machine's betting patterns and adjust, which would mean the 
computer would be distinguishable from a person.

  Many people would play poker as predictably as a computer. They would be 
welcomed at the table, too. If you would find a predictable poker opponent and 
would learn his or her patterns, you could exploit that knowledge for profit. 
Most people, however, have been unpredictable and human unpredictability would 
be an advantage at poker.

   To play poker successfully, computers would not only have to develop human 
unpredictability, they would have to learn to adjust to human unpredictability 
as well. Computers would fail miserably at the problem of adjusting to 
ever-changing social conditions that would result from human interactions.

  That would be why beating a computer at poker has been so easy. Of course, 
the same requirement, the ability to adjust unpredictability, would apply to 
poker playing humans who would want to be successful.  You should go back and 
study how Sid had adjusted each hour in his poker session. However, as humans, 
we have been more accustomed to human unpredictability, so we have been far 
better at learning how to adjust.
http://www.holdempokergame.poker.tj/adjust-your-play-to-conditions-1.html

Of course, he's talking about dumb narrow AI purely-predicting-and-predictable 
computers, but we're all interested in building AGI computers that 
expect-unpredictability-and-can-react-unpredictably, right? (Which means being 
predicting-and-predictable some of the time too. The real world is 
complicated.)


From: Jim Bromer 
Sent: Monday, June 28, 2010 6:35 PM
To: agi 
Subject: Re: [agi] A Primary Distinction for an AGI


  On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:


Inanimate objects normally move  *regularly,* in *patterned*/*pattern* 
ways, and *predictably.*

Animate objects normally move *irregularly*, * in *patchy*/*patchwork* 
ways, and *unbleedingpredictably* .



I think you made a major tactical error and just got caught acting the way you 
are constantly criticizing everyone else for acting.  --(Busted)--

You might say my interest is: how do we get a contemporary computer program to 
deal with situations in which a prevailing (or presumptuous) point of view 
should be reconsidered from different points of view, when the range of 
reasonable ways to look at a problem is not clear and the possibilities are too 
numerous for a contemporary computer to examine carefully in a reasonable 
amount of time.

For example, we might try opposites, and in this case I wondered about the case 
where we might want to consider a 'supposedly inanimate object' that moves in 
an irregular and unpredictable way.  Another example: Can unpredictable 
itself be considered predictable?  To some extent the answer is, of course it 
can.  The problem with using opposites is that it is an idealization of real 
world situations and where using alternative ways of looking at a problem may 
be useful.  Can an object be both inanimate and animate (in the sense Mike used 
the term)?  Could there be another class of things that was neither animate nor 
inanimate?  Is animate versus inanimate really the best way to describe living 
versus non living?  No?

Given that the possibilities could quickly add up and given that they are not 
clearly defined, it presents a major problem of complexity to the would be 
designer of a true AGI program.  The problem is that it is just not feasible to 
evaluate millions of variations of possibilities and then find the best 
candidates within a reasonable amount of time. And this problem does not just 
concern the problem of novel situations but those specific situations that are 
familiar but where there are quite a few details that are not initially 
understood.  While this is -clearly- a human problem, it is a much more severe 
problem for contemporary AGI.

Jim Bromer




Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Matt Mahoney
David Jones wrote:
 I also want to mention that I develop solutions to the toy problems with the 
 real problems in mind. I also fully intend to work my way up to the real 
 thing by incrementally adding complexity and exploring the problem well at 
 each level of complexity.

A little research will show you the folly of this approach. For example, the 
toy approach to language modeling is to write a simplified grammar that 
approximates English, then write a parser, then some code to analyze the parse 
tree and take some action. The classic example is SHRDLU (blocks world, 
http://en.wikipedia.org/wiki/SHRDLU ). Efforts like that have always stalled. 
That is not how people learn language. People learn from lots of examples, not 
explicit rules, and they learn semantics before grammar.

For a second example, the toy approach to modeling logical reasoning is to 
design a knowledge representation based on augmented first order logic, then 
write code to implement deduction, forward chaining, backward chaining, etc. 
The classic example is Cyc. Efforts like that have always stalled. That is not 
how people reason. People learn to associate events that occur in quick 
succession, and then reason by chaining associations. This model is built in. 
People might later learn math, programming, and formal logic as rules for 
manipulating symbols within the framework of natural language learning.
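
Purely as a caricature of that last point (not a claim about how any
particular system implements it), association-by-quick-succession and
chaining can be sketched in a few lines of Python; the events, the window
size and the weights are all invented:

# Toy sketch: events seen close together in time get a stronger link;
# a query is then answered by following the strongest links.
from collections import defaultdict

links = defaultdict(float)

def observe(sequence, window=2):
    # Strengthen links between events occurring within `window` steps.
    for i, a in enumerate(sequence):
        for b in sequence[i + 1:i + 1 + window]:
            links[(a, b)] += 1.0

def chain(start, steps=3):
    # Reason by chaining: follow the strongest outgoing association.
    path = [start]
    for _ in range(steps):
        outgoing = {b: w for (a, b), w in links.items() if a == path[-1]}
        if not outgoing:
            break
        path.append(max(outgoing, key=outgoing.get))
    return path

observe(["clouds", "rain", "wet ground", "slippery road"])
observe(["clouds", "rain", "umbrella"])
print(chain("clouds"))  # e.g. ['clouds', 'rain', 'wet ground', 'slippery road']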

For a third example, the toy approach to modeling vision is to segment the 
image into regions and try to interpret the meaning of each region. Efforts 
like that have always stalled. That is not how people see. People learn to 
recognize visual features that they have seen before. Features are made up of 
weighted sums of lots of simpler features with learned weights. Features range 
from dots, edges, color, and motion at the lowest levels, to complex objects 
like faces at the higher levels. Vision is integrated with lots of other 
knowledge sources. You see what you expect to see.
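
The "weighted sums of simpler features" idea can likewise be caricatured in a
few lines; the weights below are invented, whereas in a learned system they
come from training data:

# Each feature is a weighted sum of simpler features, stacked into a
# small hierarchy. Weights are invented for illustration only.
def feature(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

pixels = [0.9, 0.1, 0.8, 0.2]                   # lowest level: raw intensities
edge_a = feature([1, -1, 1, -1], pixels)        # crude edge-like detector
edge_b = feature([1, 1, -1, -1], pixels)        # another crude edge detector
corner = feature([0.6, 0.6], [edge_a, edge_b])  # mid-level feature from edges
face_like = feature([0.8], [corner])            # higher-level feature
print(edge_a, edge_b, corner, face_like)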

The common theme is that real AGI consists of a learning algorithm, an opaque 
knowledge representation, and a vast amount of training data and computing 
power. It is not an extension of a toy system where you code all the knowledge 
yourself. That doesn't scale. You can't know more than an AGI that knows more 
than you. So I suggest you do a little research instead of continuing to repeat 
all the mistakes that were made 50 years ago. You aren't the first person to do 
these kinds of experiments.

 -- Matt Mahoney, matmaho...@yahoo.com





From: David Jones davidher...@gmail.com
To: agi agi@v2.listbox.com
Sent: Mon, June 28, 2010 4:00:24 PM
Subject: Re: [agi] A Primary Distinction for an AGI

I also want to mention that I develop solutions to the toy problems with the 
real problems in mind. I also fully intend to work my way up to the real thing 
by incrementally adding complexity and exploring the problem well at each level 
of complexity. As you do this, the flaws in the design will be clear and I can 
retrace my steps to create a different solution. The benefit to this strategy 
is that we fully understand the problems at each level of complexity. When you 
run into something that is not accounted, you are much more likely to know how 
to solve it. Despite its difficulties, I prefer my strategy to the alternatives.

Dave


On Mon, Jun 28, 2010 at 3:56 PM, David Jones davidher...@gmail.com wrote:

That does not have to be the case. Yes, you need to know what problems you 
might have in more complicated domains to avoid developing completely useless 
theories on toy problems. But, as you develop for full complexity problems, 
you are confronted with several sub problems. Because you have no previous 
experience, what tends to happen is you hack together a solution that barely 
works and simply isn't right or scalable because we don't have a full 
understanding of the individual sub problems. Having experience with the full 
problem is important, but forcing yourself to solve every sub problem at once 
is not a better strategy at all. You may think my strategies has flaws, but I 
know that and still chose it because the alternative strategies are worse.

Dave



On Mon, Jun 28, 2010 at 3:41 PM, Russell Wallace russell.wall...@gmail.com 
wrote:

On Mon, Jun 28, 2010 at 4:54 PM, David Jones davidher...@gmail.com wrote:
 But, that's why it is important to force oneself to solve them in such a 
 way that it IS applicable to AGI. It doesn't mean that you have to choose 
 a problem that is so hard you can't cheat. It's unnecessary to do that 
 unless you can't control your desire to cheat. I can.

That would be relevant if it was entirely a problem of willpower and
self-discipline, but it isn't. It's also a problem of guidance. A real
problem gives you feedback at every step of the way, it keeps blowing
your ideas out of the water until you come up with one that will
actually work, that you would never have thought of in a vacuum.

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Ben Goertzel
Interestingly, the world's best AI poker program *does* work by applying
sophisticated Bayesian probability analysis to social modeling...

http://pokerparadime.com/
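
As a toy illustration of the general idea only (not how that particular
program works), Bayesian opponent modeling can be as simple as a
Beta-Bernoulli update of an opponent's estimated bluff frequency from
observed showdowns; the prior and the observations below are invented:

# Toy Bayesian opponent model: estimate an opponent's bluff frequency
# with a Beta-Bernoulli update as bluffs are revealed at showdown.
class BluffModel:
    def __init__(self, prior_bluffs=1.0, prior_value_bets=1.0):
        # Beta(a, b) prior over the probability that a big bet is a bluff.
        self.a = prior_bluffs
        self.b = prior_value_bets

    def observe(self, was_bluff):
        if was_bluff:
            self.a += 1
        else:
            self.b += 1

    def p_bluff(self):
        # Posterior mean of Beta(a, b).
        return self.a / (self.a + self.b)

model = BluffModel()
for was_bluff in [True, False, False, True, True]:  # invented observations
    model.observe(was_bluff)
print("estimated bluff frequency: %.2f" % model.p_bluff())  # prints 0.57 (posterior mean 4/7)

The full social-modeling problem described below - adjusting to opponents who
are themselves adjusting - is of course far harder than a single-parameter
update like this.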

-- Ben

On Mon, Jun 28, 2010 at 7:02 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  There would be an  insidious problem with programming computers to play
 poker  that in Sid’s opinion  would raise the Turing test to a higher level.

   The problem would not be whether people could figure out if they were up
 against a computer. It would be whether the computer could figure out
 people, particularly the ever-changing social dynamics in a randomly
 selected group of people. Nobody at a poker table would care whether or not
 the computer would play poker like a person.

   In fact, people would welcome a computer, since computers would tend to
 play predictably. Computers would be, by definition, predictable, which
  would be the meaning of the word 'programmed'.

    If you would play a computer simulation for a short amount of time, you
  would learn the machine's betting patterns and adjust, which would mean the
  computer would be distinguishable from a person.

   Many people would play poker as predictably as a computer. They would be
 welcomed at the table, too. If you would find a predictable poker opponent
 and would learn his or her patterns, you could exploit that knowledge for
 profit. Most people,however, have been unpredictable and human
 unpredictability would be an  advantage at poker.

   To play poker successfully, computers would not only have to develop
  human unpredictability, they would have to learn to adjust to human
 unpredictability as well. Computers would fail miserably at the problem of
 adjusting to ever changing social conditions that would result from human
 interactions.

   That would be why beating a computer at poker has been so easy. Of
 course, the same requirement, the ability to adjust unpredictability, would
 apply to poker playing humans who would want to be successful.  You should
 go back and study how Sid had adjusted each hour in his poker session.
 However, as humans, we have been more accustomed to human unpredictability,
 so we have been far better at learning how to adjust.
 http://www.holdempokergame.poker.tj/adjust-your-play-to-conditions-1.html

 Of course, he's talking about dumb narrow AI
 purely-predicting-and-predictable computers,  we're all interested in
 building AGI computers that
 expect-unpredictability-and-can-react-unpredictably, right? (Wh. means
 being predicting-and-predictable some of the time too. The real world is
 complicated.).

  *From:* Jim Bromer jimbro...@gmail.com
 *Sent:* Monday, June 28, 2010 6:35 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] A Primary Distinction for an AGI

   On Mon, Jun 28, 2010 at 11:15 AM, Mike Tintner tint...@blueyonder.co.uk
  wrote:


 Inanimate objects normally move  *regularly,* in *patterned*/*pattern*
 ways, and *predictably.*

 Animate objects normally move *irregularly*, * in *patchy*/*patchwork*
 ways, and *unbleedingpredictably* .




 I think you made a major tactical error and just got caught acting the way
 you are constantly criticizing everyone else for acting.  --(Busted)--

  You might say my interest is: how do we get a contemporary computer program
 to deal with situations in which a prevailing (or presumptuous) point of
 view should be reconsidered from different points of view, when the range of
 reasonable ways to look at a problem is not clear and the possibilities are
 too numerous for a contemporary computer to examine carefully in a
 reasonable amount of time.

 For example, we might try opposites, and in this case I wondered about the
 case where we might want to consider a 'supposedly inanimate object' that
 moves in an irregular and unpredictable way.  Another example: Can
 unpredictable itself be considered predictable?  To some extent the answer
 is, of course it can.  The problem with using opposites is that it is an
 idealization of real world situations and where using alternative ways of
 looking at a problem may be useful.  Can an object be both inanimate and
 animate (in the sense Mike used the term)?  Could there be another class of
  things that were neither animate nor inanimate?  Is animate versus inanimate
 really the best way to describe living versus non living?  No?

 Given that the possibilities could quickly add up and given that they are
 not clearly defined, it presents a major problem of complexity to the would
 be designer of a true AGI program.  The problem is that it is just not
 feasible to evaluate millions of variations of possibilities and then find
 the best candidates within a reasonable amount of time. And this problem
 does not just concern the problem of novel situations but those specific
 situations that are familiar but where there are quite a few details that
 are not initially understood.  While this is -clearly- a human problem, it
  is a much more severe problem for contemporary AGI.

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread David Jones
Natural language requires more than the words on the page in the real world.
Of course that didn't work.

Cyc also is trying to store knowledge about a super complicated world in
simplistic forms and also requires more data to get right.

Vision and other sensory interpretation, on the other hand, do not require
more info because that is where the experience comes from.

On Jun 28, 2010 8:52 PM, Matt Mahoney matmaho...@yahoo.com wrote:

David Jones wrote:
 I also want to mention that I develop solutions to the toy problems with
the re...
A little research will show you the folly of this approach. For example, the
toy approach to language modeling is to write a simplified grammar that
approximates English, then write a parser, then some code to analyze the
parse tree and take some action. The classic example is SHRDLU (blocks
world, http://en.wikipedia.org/wiki/SHRDLU ). Efforts like that have always
stalled. That is not how people learn language. People learn from lots of
examples, not explicit rules, and they learn semantics before grammar.

For a second example, the toy approach to modeling logical reasoning is to
design a knowledge representation based on augmented first order logic, then
write code to implement deduction, forward chaining, backward chaining, etc.
The classic example is Cyc. Efforts like that have always stalled. That is
not how people reason. People learn to associate events that occur in quick
succession, and then reason by chaining associations. This model is built
in. People might later learn math, programming, and formal logic as rules
for manipulating symbols within the framework of natural language learning.

For a third example, the toy approach to modeling vision is to segment the
image into regions and try to interpret the meaning of each region. Efforts
like that have always stalled. That is not how people see. People learn to
recognize visual features that they have seen before. Features are made up
of weighted sums of lots of simpler features with learned weights. Features
range from dots, edges, color, and motion at the lowest levels, to complex
objects like faces at the higher levels. Vision is integrated with lots of
other knowledge sources. You see what you expect to see.

The common theme is that real AGI consists of a learning algorithm, an
opaque knowledge representation, and a vast amount of training data and
computing power. It is not an extension of a toy system where you code all
the knowledge yourself. That doesn't scale. You can't know more than an AGI
that knows more than you. So I suggest you do a little research instead of
continuing to repeat all the mistakes that were made 50 years ago. You
aren't the first person to do these kinds of experiments.


-- Matt Mahoney, matmaho...@yahoo.com







Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Michael Swan

On Mon, 2010-06-28 at 16:15 +0100, Mike Tintner wrote:
 That's why Michael can't bear to even contemplate a world in which
 things 
 and people behave unpredictably. (And Ben can't bear to contemplate a 
 stockmarket that is obviously unpredictable).
 
 If he were an artist his instincts would be the opposite - he'd go for
 the 
 irregular and patchy and unpredictable twists. If he were drawing a
 box 
 going across a screen, he would have to put some irregularity in 
 somewhere - put in some fits and starts and stops - there's always an 
 irregular twist in the picture or the tale. An artist has to put some 
 surprise and life into what he does -

You patternise the things that are patternisable - like an erratic
waving arm is still an arm, and its pattern is erratic. Also, note I
used to be an art and animation lecturer for 2 years ;)


