[agi] masterpiece on an iPad

2010-07-02 Thread Mike Tintner
http://www.telegraph.co.uk/culture/culturevideo/artvideo/7865736/Artist-creates-masterpiece-on-an-iPad.html

McLuhan argues that touch is the central sense - the one that binds the others. 
He may be right. The i-devices integrate touch into intelligence.




Re: [agi] Open Sets vs Closed Sets

2010-07-02 Thread David Jones
Narrow AI is a term that describes the solution to a problem, not the
problem itself. It is a solution with a narrow scope. General AI, on the
other hand, should have a much larger scope than narrow AI and be able to
handle unforeseen circumstances.

What I don't think you realize is that open sets can be described by closed
sets. Here is an example from my own research. The set of objects I'm
allowing in the simplest case studies so far is black squares. This is a
closed set. But the number, movement and relative positions of these squares
form an open set. I can define an infinite number of ways in which a 0 to
infinite number of black squares can move. If I define a general AI
algorithm, it should be able to handle the infinite subset of the open set
that is representative of some aspect of the real world. We could also study
case studies that are not representative of the environment, though.

The example I just gave is a completely open set, yet an algorithm could
handle such an open set, and I am designing for it. So your claim that no one
is studying or handling such things is not right.
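[A minimal Python sketch of the point above - illustrative only, with the
class and function names invented for the example: the vocabulary of object
kinds is closed (one species, a black square), while the space of scenes
built from that vocabulary is open-ended.]

import random
from dataclasses import dataclass

# The closed set: only one kind/species of object is admitted.
@dataclass
class BlackSquare:
    x: float
    y: float
    size: float
    vx: float  # velocity components - motion is unconstrained
    vy: float

# The open set: a scene may contain any number of squares, in any positions,
# moving in any of infinitely many ways. This generator can keep producing
# new, previously unseen configurations indefinitely.
def random_scene(max_squares=10):
    n = random.randint(0, max_squares)
    return [BlackSquare(x=random.uniform(0, 100), y=random.uniform(0, 100),
                        size=random.uniform(1, 10),
                        vx=random.uniform(-5, 5), vy=random.uniform(-5, 5))
            for _ in range(n)]

# A general algorithm is asked to cope with whatever random_scene produces,
# even though the object vocabulary itself never grows.
scene = random_scene()
print(f"{len(scene)} black squares in this scene")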

Dave
On Wed, Jun 30, 2010 at 8:58 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I'd like opinions on terminology here.

 IMO the opposition of closed sets vs open sets is fundamental to the
 difference between narrow AI and AGI.

 However I notice that these terms have different meanings to mine in maths.

 What I mean is:

 closed set: contains a definable number and *kinds/species* of objects

 open set: contains an undefinable number and *kinds/species* of objects
 (what we in casual, careless conversation describe as containing all kinds
 of things);  the rules of an open set allow adding new kinds of things ad
 infinitum

 Narrow AIs operate in artificial environments containing closed sets of
 objects - all of which are definable. AGIs operate in real-world environments
 containing open sets of objects - some of which will be definable, and some
 definitely not.

 To engage in any real world activity, like walking down a street or
 searching/tidying a room or reading a science book/text is to  operate
 with open sets of objects,  because the next field of operations - the
 next street or room or text -  may and almost certainly will have
 unpredictably different kinds of objects from the last.

 Any objections to my use of these terms, or suggestions that I should use
 others?







Re: [agi] masterpiece on an iPad

2010-07-02 Thread Matt Mahoney
It could be done a lot faster if the iPad had a camera.

 -- Matt Mahoney, matmaho...@yahoo.com









Re: [agi] Open Sets vs Closed Sets

2010-07-02 Thread Mike Tintner
Well, first, you're not dealing with open sets in my broad sense - sets
containing a potentially unlimited number of different SPECIES of things.

[N.B. An extension to my definitions here - I should have added that all
members of a set have fundamental SIMILARITIES or RELATIONSHIPS - and the set
is constrained. An open set does not include everything under the sun (unless
that is the title of the set). So a set may be everything in that room or
that street or text, but it will not include everything under the sun.]

With respect to your example, a relevant broadly open-species set might then
be regular shapes or geometric shapes, including most shapes in geometry (or,
if you prefer, more limited sections of geometry) - where species = different
kinds of shapes: squares, triangles, fractals etc. I can't see how your work
with squares will prepare you to deal with a broad range of geometric shapes
- please explain. AFAICT you have taken a very closed geometric space/set.

More narrowly, you raise a very interesting question. Let us take a set of
just one or a very few objects, as you seem to be doing - say one or two
black squares. The relevant set is then something like all the positionings
[or movements] of two black squares within a given area [like a screen]. The
set is principally one of square positions.

You make the bold claim: "I can define an infinite number of ways in which a
0 to infinite number of black squares can move." Are you then saying your
program can deal with every positioning/configuration of two squares on a
screen? [I'm making this as simple as possible.] I would say: no way. That is
an open set of positions. And one can talk of different species of positions
[though I must say I haven't thought much about this].

And this is a subject, IMO, of central AGI importance - the predictability
of object positions and movements.

If you could solve this, your program would in fairly short order become a
great inventor - for finding new ways to position and apply objects is
central to a vast amount of invention. But it is absolutely impossible to do
what you're claiming - there are an infinity of non-formulaic,
non-predictable - and therefore always new - ways to position objects - and
that's why invention (like coming up with the idea of Chicken Kiev - putting
the gravy inside instead of outside the food) is so hard. We're talking here
about the fundamental nature of objects and space.






Re: [agi] masterpiece on an iPad

2010-07-02 Thread Mike Tintner
That's like saying cartography or cartoons could be done a lot faster if
they just used cameras - ask Michael to explain what the hand can draw that
the camera can't.







Re: [agi] masterpiece on an iPad

2010-07-02 Thread Matt Mahoney
AGI is all about building machines that think, so you don't have to.

 -- Matt Mahoney, matmaho...@yahoo.com









Re: [agi] masterpiece on an iPad

2010-07-02 Thread Mike Tintner
Matt: "AGI is all about building machines that think, so you don't have to."

Matt,

I'm afraid that's equally silly and also shows a similar lack of understanding 
of sensors and semiotics.

An AGI robot won't know what it's like to live inside a human skin, and will
have limited understanding of our life problems - different body, different
sensors, different body metaphors, and ergo different connotations for the
signs it may use.

So, sorry, you're just going to have to keep thinking.

Funny this, because I just posted the following elsewhere:

What's The Difference between Dawkins & The Pope?

"We are survival machines - robot vehicles blindly programmed to preserve the
selfish molecules known as genes."
- The Pope

"Why did God make you? God made me to know him, love him and serve him in
this world, and be with him forever in the next."
- Richard Dawkins

God, genes - what's the diff? Same basic urge to subordinate the human to a
higher purpose, to be worshipped and adored. Is there any real difference
between so many scientists and the religious here?

[And one might add, AGI-ers with their omnipotent SuperAGI - in nomine
Turing, et Neumann, et Minsky.]









RE: [agi] masterpiece on an iPad

2010-07-02 Thread John G. Rose
An AGI may not really think like we do; it may just execute code.

Though I suppose you could program in a lot of fuzzy loops and idle
speculation, entertaining possibilities, having human-think envy...

 

John

 







Re: [agi] Reward function vs utility

2010-07-02 Thread Joshua Fox
I found the answer as given by Legg, *Machine Superintelligence*, p. 72,
copied below. A reward function is used to bypass potential difficulty in
communicating a utility function to the agent.

Joshua

The existence of a goal raises the problem of how the agent knows what the
goal is. One possibility would be for the goal to be known in advance and for
this knowledge to be built into the agent. The problem with this is that it
limits each agent to just one goal. We need to allow agents that are more
flexible, specifically, we need to be able to inform the agent of what the
goal is. For humans this is easily done using language. In general however,
the possession of a sufficiently high level of language is too strong an
assumption to make about the agent. Indeed, even for something as intelligent
as a dog or a cat, direct explanation is not very effective.

Fortunately there is another possibility which is, in some sense, a blend of
the above two. We define an additional communication channel with the
simplest possible semantics: a signal that indicates how good the agent's
current situation is. We will call this signal the reward. The agent simply
has to maximise the amount of reward it receives, which is a function of the
goal. In a complex setting the agent might be rewarded for winning a game or
solving a puzzle. If the agent is to succeed in its environment, that is,
receive a lot of reward, it must learn about the structure of the environment
and in particular what it needs to do in order to get reward.
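[A small Python sketch of the reward-channel idea in the passage above - my
own illustration, not Legg's formalism, with all names invented for the
example: the agent never sees a goal or utility function, only a scalar
reward on a separate channel, and learns to maximise it.]

import random

# Toy environment: the hidden goal is to emit the symbol the environment
# wants. The agent is only told how good its last action was, via the reward.
class Environment:
    def __init__(self, symbols=("a", "b", "c")):
        self.symbols = symbols
        self.target = random.choice(symbols)  # the goal, hidden from the agent

    def step(self, action):
        return 1.0 if action == self.target else 0.0

# A very simple reward-maximising agent: keep a running value estimate per
# action and mostly pick the best one (epsilon-greedy exploration).
class Agent:
    def __init__(self, actions, epsilon=0.1):
        self.values = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

env = Environment()
agent = Agent(env.symbols)
total = 0.0
for _ in range(500):
    a = agent.act()
    r = env.step(a)
    agent.learn(a, r)
    total += r
print(f"hidden target was {env.target!r}; reward collected: {total:.0f}/500")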




On Mon, Jun 28, 2010 at 1:32 AM, Ben Goertzel b...@goertzel.org wrote:

 You can always build the utility function into the assumed universal Turing
 machine underlying the definition of algorithmic information...

 I guess this will improve learning rate by some additive constant, in the
 long run ;)

 ben

 On Sun, Jun 27, 2010 at 4:22 PM, Joshua Fox joshuat...@gmail.com wrote:

 This has probably been discussed at length, so I will appreciate a
 reference on this:

 Why does Legg's definition of intelligence (following on Hutter's AIXI and
 related work) involve a reward function rather than a utility function? For
 this purpose, reward is a function of the world state/history, which is
 unknown to the agent, while a utility function is known to the agent.

 Even if we replace the former with the latter, we can still have a
 definition of intelligence that integrates optimization capacity over all
 possible utility functions.

 What is the real  significance of the difference between the two types of
 functions here?

 Joshua




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 CTO, Genescient Corp
 Vice Chairman, Humanity+
 Advisor, Singularity University and Singularity Institute
 External Research Professor, Xiamen University, China
 b...@goertzel.org

 
 “When nothing seems to help, I go look at a stonecutter hammering away at
 his rock, perhaps a hundred times without as much as a crack showing in it.
 Yet at the hundred and first blow it will split in two, and I know it was
 not that blow that did it, but all that had gone before.”







Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim, what evidence do you have that Occam's Razor ... is wrong, besides
 your own opinions? It is well established that elegant (short) theories are
 preferred in all branches of science because they have greater predictive
 power.



  -- Matt Mahoney, matmaho...@yahoo.com


When a heuristic is used as if it were an axiom of truth, it will interfere
with the development of reasonable insight, precisely because a heuristic is
not an axiom. And when you apply this heuristic (which does have value) as an
unquestionable axiom of mind, you are making a more egregious claim, because
you are multiplying the force of the error.

Occam's Razor has greater predictive power within the boundaries of the
isolation experiments which have the greatest potential to enhance its power.
If the simplest theories are preferred because they have the greater
predictive power, then it would follow that isolation experiments should be
the preferred vehicles of science, just because they can produce the theories
with the most predictive power. Whether or not this is the case (it is the
popular opinion), it does not answer the question of whether narrow AI (for
example) should be the preferred child of computer science just because the
theorems of narrow AI are so much better at predicting their (narrow) events
than the theorems of AGI are at comprehending their (more complicated)
events.

Jim Bromer





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer
On Wed, Jun 30, 2010 at 5:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim, what evidence do you have that Occam's Razor or algorithmic
 information theory is wrong,
 Also, what does this have to do with Cantor's diagonalization argument? AIT
 considers only the countably infinite set of hypotheses.


 -- Matt Mahoney, matmaho...@yahoo.com



There cannot be a one-to-one correspondence between the representations of
the shortest programs that produce strings and the strings that they produce.
This means that if the consideration of the hypotheses were to be put into
general mathematical form, it must include the potential for many-to-one
relations between candidate programs (or subprograms) and output strings.





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer

But there is also no way to determine what the shortest program is, since
there may be different programs that are the same length. That means that
there is a many-to-one relation between programs and program length. So the
claim that you could just iterate through programs *by length* is false. This
is the goal of algorithmic information theory, not a premise of a methodology
that can be used. So you have the diagonalization problem.





Re: [agi] Reward function vs utility

2010-07-02 Thread Steve Richfield
To all,

There may be a fundamental misdirection here on this thread, for your
consideration...

There have been some very rare cases where people have lost the use of one
hemisphere of their brains and then subsequently recovered, usually with the
help of recently developed clot-removal surgery. What they report seems to be
completely at odds with the present discussion. I will summarize and probably
overgeneralize, because there aren't many such survivors. One was a brain
researcher who subsequently wrote a book, about which I heard a review on the
radio, but I don't remember details such as the title or the author's name.
Hopefully, one of you has found and read this book.

It appears that one hemisphere is a *completely* passive observer, one that
does *not* even bother to distinguish you from not-you, other than noting a
probable boundary. The other hemisphere concerns itself with manipulating the
world, regardless of whether particular pieces of it are you or not-you. It
seems unlikely that reward could have any effect at all on the
passive-observer hemisphere.

In the case of the author of the book, apparently the manipulating
hemisphere was knocked out of commission for a while, and then slowly
recovered. This allowed her to see the passively observed world, without the
overlay of the manipulating hemisphere. Obviously, this involved severe
physical impairment until she recovered.

Note that, AFAIK, all of the AGI efforts are egocentric, while half of our
brains are concerned with passively filtering/understanding the world enough
to apply egocentric logic. Note further that, since the two hemispheres are
built from the same types of neurons, the computations needed to do these two
very different tasks are performed by the same wet-stuff. There is apparently
some sort of advanced Turing-machine concept going on in wetware.

This sounds to me like a must-read for any AGIer, and I certainly would have
read it, had I been one.

Hence, I see goal direction, reward, etc., as potentially useful only in
some tiny part of our brains.

Any thoughts?

Steve





Re: [agi] masterpiece on an iPad

2010-07-02 Thread Matt Mahoney
An AGI only has to predict your behavior so that it can serve you better by 
giving you what you want without you asking for it. It is not a copy of your 
mind. It is a program that can call a function that simulates your mind for 
some arbitrary purpose determined by its programmer.

 -- Matt Mahoney, matmaho...@yahoo.com









RE: [agi] masterpiece on an iPad

2010-07-02 Thread John G. Rose
Sounds like everyone would want one - or one AGI could service us all. And
that AGI could do all of the heavy thinking for us. We could become
pleasure-seeking, fibrillating blobs of flesh and bone suckling on the
electronic brains of one big giant AGI.

John

 






Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Jim Bromer

A counter-argument is that there are only a finite number of Turing machine
programs of a given length. However, since you guys have specifically
designated that this theorem applies to any construction of a Turing machine,
it is not clear that this counter-argument can be used. And there is still
the specific problem that you might want to try a program that writes a
longer program to output a string (or many strings). Or you might want to
write a program that can be called to write longer programs on a dynamic
basis. I think these cases, where you might consider a program that outputs a
longer program (or another instruction string for another Turing machine),
constitute a serious problem that, at the least, deserves to be answered with
sound analysis.

Part of my original intuitive argument, which I formed some years ago, was
that without a heavy constraint on the instructions for the program, it will
be practically impossible to test for or declare that some program is indeed
the shortest program. However, I can't quite get to the point now where I can
say that there is definitely a diagonalization problem.

Jim Bromer





Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread Matt Mahoney
Jim, to address all of your points,

Solomonoff induction claims that the probability of a string is proportional
to the weighted number of programs that output the string, where each program
M is weighted by 2^-|M|. The probability is dominated by the shortest program
(whose length is the Kolmogorov complexity), but it is not exactly the same.
The difference is small enough that we may neglect it, just as we neglect
differences that depend on the choice of language.

Here is the proof that Kolmogorov complexity is not computable. Suppose it 
were. Then I could test the Kolmogorov complexity of strings in increasing 
order of length (breaking ties lexicographically) and describe the first 
string that cannot be described in less than a million bits, contradicting the 
fact that I just did. (Formally, I could write a program that outputs the first 
string whose Kolmogorov complexity is at least n bits, choosing n to be larger 
than my program).

Here is the argument that Occam's Razor and Solomonoff distribution must be
true. Consider all possible probability distributions p(x) over any infinite
set X of possible finite strings x, i.e. any X = {x: p(x) > 0} that is
infinite. All such distributions must favor shorter strings over longer ones.
Consider any x in X. Then p(x) > 0. There can be at most a finite number
(less than 1/p(x)) of strings that are more likely than x, and therefore an
infinite number of strings which are less likely than x. Of this infinite
set, only a finite number (less than 2^|x|) can be shorter than x, and
therefore there must be an infinite number that are longer than x. So for
each x we can partition X into 4 subsets as follows:

- shorter and more likely than x: finite
- shorter and less likely than x: finite
- longer and more likely than x: finite
- longer and less likely than x: infinite.

So in this sense, any distribution over the set of strings must favor shorter 
strings over longer ones.
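[A small numerical illustration of the counting argument above - not the
proof itself, and the particular distribution p(x) is an arbitrary choice
made only for this example: for any string x, at most 1/p(x) strings can be
more likely than x and at most 2^|x| can be shorter, so almost all strings,
and in particular almost all longer ones, must be less likely than x.]

from itertools import product

def binary_strings(max_len):
    # Enumerate all binary strings up to max_len, shortest first.
    yield ""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

# An arbitrary normalized distribution over binary strings, chosen only for
# illustration: each length class n gets total mass 2^-(n+1), split uniformly
# among its 2^n strings, so p(x) = 2^-(2*len(x)+1).
def p(x):
    return 2.0 ** -(2 * len(x) + 1)

x = "0110"
px = p(x)
window = list(binary_strings(12))  # a finite window of the infinite set

more_likely = sum(1 for s in window if p(s) > px)
shorter = sum(1 for s in window if len(s) < len(x))

print(f"p(x) = {px:.3e}")
print(f"strings more likely than x: {more_likely} (bound 1/p(x) = {1/px:.0f})")
print(f"strings shorter than x: {shorter} (bound 2^|x| = {2 ** len(x)})")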

-- Matt Mahoney, matmaho...@yahoo.com









Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-02 Thread David Jones
Nice Occam's Razor argument. I understood it simply because I knew there are
always an infinite number of possible explanations for every observation that
are more complicated than the simplest explanation. So, without a reason to
choose one of those other interpretations, why choose it? You could look for
reasons in complex environments, but it would likely be more efficient to
wait for a reason to need a better explanation. It's more efficient to wait
for an inconsistency than to search an infinite set without a reason to do
so.
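[A hedged Python sketch of the strategy described above - the hypothesis
names and the crude complexity ranking are invented for the example: start
from the simplest explanation consistent with what has been seen, and only
move to a more complex one when an observation actually contradicts it.]

# Candidate explanations, ranked by a crude complexity score (an assumption
# made purely for this illustration). Each is a predicate over an observation.
hypotheses = [
    ("constant zero", 0, lambda obs: obs == 0),
    ("even numbers",  1, lambda obs: obs % 2 == 0),
    ("any integer",   2, lambda obs: isinstance(obs, int)),
]
hypotheses.sort(key=lambda h: h[1])  # simplest first

def explain(observations):
    current = 0  # start with the simplest hypothesis
    for obs in observations:
        # Only reach for a more complex explanation when the current one fails.
        while not hypotheses[current][2](obs):
            current += 1
    return hypotheses[current][0]

print(explain([0, 0, 0]))     # "constant zero" - no inconsistency, no search
print(explain([0, 2, 4, 6]))  # the 2 forces a revision to "even numbers"
print(explain([0, 2, 3]))     # the 3 forces a further revision to "any integer"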

Dave
