Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread Mike Tintner
Matt: It is like the way evolution works, except that there is a human in the
loop to make the process a little more intelligent.

IOW this is like AGI, except that it's narrow AI. That's the whole point - you 
have to remove the human from the loop.  In fact, it also sounds like a 
misconceived and rather literal idea of evolution as opposed to the reality.


From: Matt Mahoney 
Sent: Monday, June 21, 2010 3:01 AM
To: agi 
Subject: Re: [agi] An alternative plan to discover self-organization theory


Steve Richfield wrote:
 He suggested that I construct a simple NN that couldn't work without self 
 organizing, and make dozens/hundreds of different neuron and synapse 
 operational characteristics selectable ala genetic programming, put it on the 
 fastest computer I could get my hands on, turn it loose trying arbitrary 
 combinations of characteristics, and see what the winning combination turns 
 out to be. Then, armed with that knowledge, refine the genetic 
 characteristics and do it again, and iterate until it efficiently self 
 organizes. This might go on for months, but self-organization theory might 
 just emerge from such an effort. 


Well, that is the process that created human intelligence, no? But months? It
actually took 3 billion years on a planet-sized molecular computer.


That doesn't mean it won't work. It just means you have to narrow your search 
space and lower your goals.


I can give you an example of a similar process. Look at the code for 
PAQ8HP12ANY and LPAQ9M data compressors by Alexander Ratushnyak, which are the 
basis of winning Hutter prize submissions. The basic principle is that you have 
a model that receives a stream of bits from an unknown source and it uses a 
complex hierarchy of models to predict the next bit. It is sort of like a 
neural network because it averages together the results of lots of adaptive 
pattern recognizers by processes that are themselves adaptive. But I would 
describe the code as inscrutable, kind of like your DNA. There are lots of 
parameters to tweak, such as how to preprocess the data, arrange the 
dictionary, compute various contexts, arrange the order of prediction flows, 
adjust various learning rates and storage capacities, and make various 
tradeoffs sacrificing compression to meet memory and speed requirements. It is 
simple to describe the process of writing the code. You make random changes and 
keep the ones that work. It is like the way evolution works, except that there 
is a human in the loop to make the process a little more intelligent.
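
The shape of that loop, as a minimal sketch (the parameter list and the
synthetic objective below are stand-ins; a real run would invoke the
compressor on a benchmark file and read back the output size):

    import random

    # Hypothetical stand-ins for a few of the knobs listed above
    # (learning rates, context choices, memory budgets).
    params = [0.02, 4.0, 22.0]

    def compressed_size(p):
        # Placeholder objective. In practice: build the compressor with
        # parameters p, compress a benchmark file, return the output size.
        # A synthetic bowl-shaped function keeps the sketch runnable.
        target = [0.05, 6.0, 20.0]
        return 1e6 + sum((a - b) ** 2 * 1e4 for a, b in zip(p, target))

    best, best_size = params[:], compressed_size(params)
    for _ in range(10000):
        trial = best[:]
        i = random.randrange(len(trial))        # pick one knob at random
        trial[i] *= random.uniform(0.8, 1.25)   # make a small random change
        size = compressed_size(trial)
        if size < best_size:                    # keep the ones that work
            best, best_size = trial, size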


There are also fully automated optimizers for compression algorithms, but they
are more limited in their search space. For example, the experimental PPM-based
EPM by Serge Osnach includes a program EPMOPT that adjusts 20 numeric
parameters up or down using a hill-climbing search to find the best
compression. It can be very slow. Another program, M1X2 by Christopher Mattern,
uses a context-mixing (PAQ-like) algorithm in which the contexts are selected
by using a hill-climbing genetic algorithm to select a set of 64-bit masks. One
version was run for 3 days to find the best options to compress a file that
normally takes 45 seconds.
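
A sketch of that up/down adjustment pattern (not EPMOPT's actual code, just
the general shape of such an optimizer, with a toy objective standing in for
compressed size):

    def hill_climb(params, objective, step=1):
        # Adjust each numeric parameter up or down by `step`, keeping any
        # change that improves the objective; repeat until a full pass
        # makes no progress.
        best = objective(params)
        improved = True
        while improved:
            improved = False
            for i in range(len(params)):
                for delta in (step, -step):
                    params[i] += delta
                    score = objective(params)
                    if score < best:            # better compression: keep it
                        best, improved = score, True
                    else:
                        params[i] -= delta      # revert the change
        return params, best

    # Toy objective standing in for compressed size; 20 parameters as in EPMOPT.
    params, best = hill_climb([0] * 20, lambda p: sum((x - 3) ** 2 for x in p))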



-- Matt Mahoney, matmaho...@yahoo.com 







Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread Matt Mahoney
Mike Tintner wrote:
 Matt:It is like the way evolution works, except that there is a human in the 
 loop to make the process a little more intelligent.
  
 IOW this is like AGI, except that it's narrow AI. That's the whole point - 
 you have to remove the human from the loop.  In fact, it also sounds like a 
 misconceived and rather literal idea of evolution as opposed to the reality. 
You're right. It is narrow AI. You keep pointing out that we haven't solved the 
general problem. You are absolutely correct.

So, do you have any constructive ideas on how to solve it? Preferably something
that takes less than 3 billion years on a planet-sized molecular computer.

-- Matt Mahoney, matmaho...@yahoo.com






[agi] Re: High Frame Rates Reduce Uncertainty

2010-06-21 Thread David Jones
Ignoring Steve because we are simply going to have to agree to disagree...
And I don't see enough value in trying to understand his paper. I said the
math was overly complex, but what I really meant is that the approach is
overly complex and so filled with research-specific jargon that I don't care
to try to understand it. It is overly concerned with copying the way that the
brain does things. I don't care how the brain does it. I care about why the
brain does it. It's the same as the analogy of giving a man a fish or
teaching him to fish. You may figure out how the brain works, but it does
you little good if you don't understand why it works that way. You would
have to create a synthetic brain to take advantage of the knowledge, which
is not an approach to AGI for many reasons. There are a million other ways,
even better ways, to do it than the way the brain does it. Just because the
brain accidentally found one way out of a million to do it doesn't make it
the right way for us to develop AGI.

So, moving on...

I can't find references online, but I've read that the Air Force studied the
ability of the human eye to identify aircraft in images that were flashed on
a screen for 1/220th of a second. So the human eye can evidently make use of
exposures as brief as 1/220th of a second, i.e. the equivalent of 220 fps. Of
course, it may not operate on a fixed frames-per-second basis, but that is
beside the point. I've also heard other people say that a study has shown the
human eye takes 1000 exposures per second. They had no references though, so
it is hearsay.

The point was that the brain takes advantage of the fact that with such a
high exposure rate, the changes between each image are very small if the
objects are moving. This allows it to distinguish movement and visual
changes with extremely low uncertainty. If it detects that the changes
required to match two parts of an image are too high or the distance between
matches is too far, it can reject a match. This allows it to distinguish
only very low uncertainty changes and reject changes that have high
uncertainty.
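
That match-or-reject rule can be sketched as follows (a minimal illustration;
the patch size, search radius, and error threshold are made-up values, and
numpy is assumed):

    import numpy as np

    def match_patch(prev, curr, y, x, size=8, radius=2, max_err=10.0):
        # Find where the size-by-size patch at (y, x) in `prev` moved to
        # in `curr`, searching only within `radius` pixels: at a high
        # frame rate the true motion is tiny. Reject the match if even
        # the best candidate differs too much.
        patch = prev[y:y+size, x:x+size].astype(float)
        best_err, best_pos = np.inf, None
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0:
                    continue                       # off the top/left edge
                cand = curr[yy:yy+size, xx:xx+size].astype(float)
                if cand.shape != patch.shape:
                    continue                       # off the bottom/right edge
                err = np.abs(cand - patch).mean()  # mean absolute difference
                if err < best_err:
                    best_err, best_pos = err, (yy, xx)
        return best_pos if best_err <= max_err else None  # None = reject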

I think this is a very significant discovery regarding how the brain is able
to learn in such an ambiguous world with so many variables that are
difficult to disambiguate, interpret and understand.

Dave

On Fri, Jun 18, 2010 at 2:19 PM, David Jones davidher...@gmail.com wrote:

 I just came up with an awesome idea. I just realized that the brain takes
 advantage of high frame rates to reduce uncertainty when it is estimating
 motion. The slower the frame rate, the more uncertainty there is because
 objects may have traveled too far between images to match with high
 certainty using simple techniques.

 So, this made me think, what if the secret to the brain's ability to learn
 generally stems from this high frame rate trick? What if we made a system
 that could process even higher frame rates than the brain can? By doing this
 you can reduce the uncertainty of matches to very, very low levels (well, in
 my theory so far). If you can do that, then you can learn about the objects
 in a video, and how they move together or separately, with very high
 certainty.

 You see, matching is the main barrier when learning about objects. But with
 a very high frame rate, we can use a fast algorithm and could potentially
 reduce the uncertainty to almost nothing. Once we learn about objects,
 matching gets easier because now we have training data and experience to
 take advantage of.

 In addition, you can also gain knowledge about lighting, color variation,
 noise, etc. With that knowledge, you can then automatically create a model
 of the object with extremely high confidence. You will also be able to
 determine the effects of light and noise on the object's appearance, which
 will help match the object invariantly in the future. It allows you to
 determine what is expected and unexpected for the object's appearance with
 much higher confidence.

 Pretty cool idea huh?

 Dave






Re: [agi] Re: High Frame Rates Reduce Uncertainty

2010-06-21 Thread Matt Mahoney
Your computer monitor flashes 75 frames per second, but you don't notice any
flicker because light-sensing neurons have a response delay of about 100 ms.
Motion detection begins in the retina with cells that respond to contrast
between light and dark moving in specific directions, computed by simple,
fixed-weight circuits. Higher up in the processing chain, you detect motion
when your eyes and head smoothly track moving objects, using kinesthetic
feedback from your eye and neck muscles and input from your built-in
accelerometers, the semicircular canals in your ears. This is all very
complicated, of course. You are more likely to detect motion in objects that
you recognize and expect to move, like people, animals, cars, etc.

 -- Matt Mahoney, matmaho...@yahoo.com
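
One way to illustrate a fixed-weight, direction-selective circuit of the kind
described above is a Reichardt-style correlator (an illustrative model from
the vision literature, not a claim about the exact retinal wiring):

    def reichardt(left, right, delay=1):
        # `left` and `right` are luminance samples over time from two
        # nearby points. Each arm correlates one signal, delayed, with
        # the other, undelayed one; the difference is direction-selective.
        out = []
        for t in range(delay, len(left)):
            rightward = left[t - delay] * right[t]  # left leads: moving right
            leftward = right[t - delay] * left[t]   # right leads: moving left
            out.append(rightward - leftward)
        return out

    # An edge moving rightward hits `left` one time step before `right`.
    left  = [0, 1, 0, 0, 0]
    right = [0, 0, 1, 0, 0]
    print(sum(reichardt(left, right)))  # positive sum: rightward motion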








Re: [agi] Re: High Frame Rates Reduce Uncertainty

2010-06-21 Thread David Jones
Thank you Matt. That's very useful input.


Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread deepakjnath
The brain does not get high-frame-rate signals, as the eye itself only
gives the brain images at around 24 frames per second. Else you wouldn't be
able to watch a movie.
Any comments?


Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread David Jones
I'm reading about retinal motion processing. Maybe the brain does only
get 24 frames per second, but the retina may send it information about
hypothesized or likely movements. The brain can then do some further
processing, such as incorporating the kinesthetic feedback that tells the
brain how the body moved while the images were being captured, which Matt
mentioned (thanks again).

Maybe the important question here is: could we build a real camera system
that captures extremely high frame rates, for the purpose of generating vast
amounts of very low-uncertainty training data, which could then be used to
create algorithms that do not require such high frame rates? With such
training data, you could automatically test algorithms much more quickly. You
could potentially even use genetic algorithms to find the right solution,
although I'm not a big fan of such an approach.

To solve screenshot computer vision, I could potentially generate screenshots
at very high frame rates for training data as well. Even if the process is
supervised, it could probably generate vastly more training data than we've
ever had before.
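
A toy stand-in for such a generator (not an implementation of the proposal,
just a synthetic scene with known ground truth so that matchers can be scored
automatically; numpy assumed):

    import numpy as np

    def moving_square_frames(n_frames=240, size=64, speed=0.25):
        # One bright 8x8 square drifting across a dark background at
        # `speed` pixels per frame. At a high frame rate the per-frame
        # motion is tiny, so ground-truth matches are nearly unambiguous.
        frames = []
        for t in range(n_frames):
            img = np.zeros((size, size), dtype=np.uint8)
            x = int(t * speed) % (size - 8)
            img[28:36, x:x+8] = 255      # square at a known position
            frames.append((img, x))      # frame plus ground-truth label
        return frames

    frames = moving_square_frames()
    # Per-frame displacement is 0.25 px; any matcher tolerating ~1 px of
    # motion can be scored against the ground-truth x values.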

Dave


Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Abram Demski
Steve,

You didn't mention this, so I guess I will: larger animals do generally have
larger brains, coming close to a fixed brain/body ratio. Smarter animals
appear to be the ones with a higher brain/body ratio rather than simply a
larger brain. This to me suggests that the amount of sensory information and
muscle coordination necessary is the most important determiner of the amount
of processing power needed. There could be other interpretations, however.
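
One standard way to make the brain/body comparison quantitative is the
encephalization quotient from the comparative-neuroanatomy literature (the
exponent and constant below are Jerison's classic mammalian fit, supplied
for illustration rather than taken from the post above):

    \mathrm{EQ} = \frac{E_{\text{brain}}}{0.12\, M_{\text{body}}^{2/3}}

EQ is about 1 for a typical mammal and is often cited at around 7 for humans,
which is the sense in which humans are outliers in brain/body scaling rather
than in absolute brain size.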

It's also pretty important to say that brains are expensive to fuel. It's
probably the case that other animals didn't get as smart as us because the
additional food they could get per ounce of brain was less than the
additional food needed to support an ounce of brain. Humans were in a
situation in which it was more. So, I don't think your argument from other
animals supports your hypothesis terribly well.

One way around your instability, if it exists, would be (similar to your
hemisphere suggestion) to split the network into a number of individuals that
cooperate through very low-bandwidth connections. This would be like an
organization of humans working together. Hence, multiagent systems would
have a higher stability limit. However, it is still the case that we hit a
serious diminishing-returns scenario once we needed to start doing this
(since the low-bandwidth connections convey so much less info, we need waaay
more processing power for every IQ point or whatever). And, once these
organizations got really big, it's quite plausible that they'd have their
own stability issues.

--Abram

On Mon, Jun 21, 2010 at 11:19 AM, Steve Richfield steve.richfi...@gmail.com
 wrote:

 There has been an ongoing presumption that more brain (or computer) means
 more intelligence. I would like to question that underlying presumption.

 That being the case, why don't elephants and other large creatures have
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

 There are all sorts of network-destroying phenomena that arise in complex
 networks, e.g. phase-shift oscillators where circular analysis paths
 reinforce themselves, computational noise is endlessly analyzed, etc. We
 know that our own brains are just barely stable, as flashing lights throw
 some people into epileptic attacks, etc. Perhaps network stability is the
 intelligence limiter? If so, then we aren't going to get anywhere without
 first fully understanding it.

 Suppose for a moment that theoretically perfect neurons could work in a
 brain of limitless size, but their imperfections accumulate (or multiply) to
 destroy network operation when you get enough of them together. Brains have
 grown larger because neurons have evolved to become more nearly perfect,
 without having yet (or ever) reaching perfection. Hence, evolution may have
 struck a balance, where less intelligence directly impairs survivability,
 and greater intelligence impairs network stability, and hence indirectly
 impairs survivability.

 If the above is indeed the case, then AGI and related efforts don't stand a
 snowball's chance in hell of ever outperforming humans, UNTIL the underlying
 network stability theory is well enough understood to perform perfectly to
 digital precision. This wouldn't necessarily have to address all aspects of
 intelligence, but would at minimum have to address large-scale network
 stability.

 One possibility is chopping large networks into pieces, e.g. the
 hemispheres of our own brains. However, like multi-core CPUs, there is work
 for only so many CPUs/hemispheres.

 There are some medium-scale network analogues in the world, e.g. the power
 grid. However, there they have high-level central control and lots of
 crashes, so there may not be much to learn from them.

 Note in passing that I am working with some non-AGIers on power grid
 stability issues. While not fully understood, the primary challenge appears
 (to me) to be that the various control mechanisms (which include humans in
 the loop) violate a basic requirement for feedback stability, namely, that
 the frequency response not drop off faster than 12 dB/octave at any
 frequency. Present control systems make binary all-or-nothing decisions that
 produce astronomical high-frequency components (edges and glitches) related
 to much lower-frequency phenomena (like overall demand). Other systems then
 attempt to deal with these edges and glitches, with predictably poor
 results. Like the stock market crash of May 6, there is a list of dates of
 major outages and near-outages where the failures are poorly understood. In
 some cases the lights stayed on, but for a few seconds came ever SO close
 to a widespread outage that dozens of articles were written about them, with
 apparently no one understanding things even to the basic level that I am
 explaining here.

 Hence, a single theoretical insight might guide both power grid development
 and AGI development. For example, perhaps there is a necessary capability of
 components in large networks, to be able to custom tailor their frequency
 response curves to not participate on unstable ...
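
The 12 dB/octave requirement Steve cites is the classic Bode stability
criterion from control theory: each reactive pole in a feedback loop adds a
6 dB/octave rolloff and up to 90 degrees of phase lag,

    |H(\omega)| \propto \omega^{-n}
    \quad\Rightarrow\quad \text{phase lag} \approx n \times 90^{\circ},

so wherever the loop gain is still above unity while the rolloff has reached
12 dB/octave (n = 2, i.e. 180 degrees of lag), negative feedback has turned
into positive feedback and the loop can oscillate.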

[agi] Fwd: AGI question

2010-06-21 Thread rob levy
Hi

I'm new to this list, but I've been thinking about consciousness, cognition
and AI for about half of my life (I'm 32 years old).  As is probably the
case for many of us here, my interests began with direct recognition of the
depth and wonder of varieties of phenomenological experiences-- and
attempting to comprehend how these constellations of significance fit in
with a larger picture of what we can reliably know about the natural world.


I am secondarily motivated by the fact that (considerations of morality or
amorality aside) AGI is inevitable, though it is far from being a foregone
conclusion that powerful general thinking machines will have a first-hand
subjective relationship to a world, as living creatures do-- and therefore
it is vital that we do as well as possible in understanding what makes
systems conscious.  A zombie machine-intelligence singularity is something
I would rather refer to as a holocaust, even if no one were directly
killed, assuming these entities could ultimately prevail over the previous
forms of life on our planet.

I'm sure I'm not the only one on this list who sees a behavioral/ecological
level of analysis as the most likely correct level at which to study
perception and cognition, and perception as being a kind of active
relationship between an organism and an environment.  Having thoroughly
convinced myself of a non-dualist, embodied, externalist perspective on
cognition, I turn to the nature of life itself (and possibly even physics,
but maybe that level will not be necessary) to make sense of the nature of
subjectivity.  I like Bohm's or Bateson's panpsychism about systems as
wholes, and significance as informational distinctions (which it would be
natural to understand as the basis of subjective experience), but this
is descriptive rather than explanatory.

I am not a biologist, but I am increasingly interested in finding answers to
what it is about living organisms that gives them a unity such that
something is something to the system as a whole.  The line of
investigation that theoretical biologists like Robert Rosen and other
NLDS/chaos people have pursued is interesting, but I am unfamiliar with
related work that might have made more progress on the system-level
properties that give life its characteristic unity and system-level
responsiveness.  To me, this seems the most likely candidate for a paradigm
shift that would produce AGI.  In contrast I'm not particularly convinced
that modeling a brain is a good way to get AGI, although I'd guess we could
learn a few more things about the coordination of complex behavior if we
could really understand them.

Another way to put this is that obviously evolutionary computation would be
more than just boring hill-climbing if we knew what an organism even IS
(perhaps in a more precise computational sense). If we can know what an
organism is, then it should be (maybe) trivial to model concepts,
consciousness, and high-level semantics to the umpteenth degree, or at least
this would clear a major hurdle, I think.

Even assuming a solution to the problem posed above, there is still plenty
of room for other-minds skepticism about non-living entities implemented on
questionably foreign mediums, but there would be a lot more reason to sleep
well knowing that the science/technology is leading in a direction in which
questions about subjectivity could be meaningfully investigated.

Rob





Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread rob levy
(I'm a little late in this conversation.  I tried to send this message the
other day but I had my list membership configured wrong. -Rob)

-- Forwarded message --
From: rob levy r.p.l...@gmail.com
Date: Sun, Jun 20, 2010 at 5:48 PM
Subject: Re: [agi] An alternative plan to discover self-organization theory
To: agi@v2.listbox.com


On a related note, what is everyone's opinion on why evolutionary algorithms
are such a miserable failure as creative machines, despite their successes
in narrow optimization problems?

I don't want to conflate the possibly separable problems of biological
development and evolution, though they are interrelated.  There are various
approaches to evolutionary theory, such as Lima de Faria's evolution without
selection ideas and Reid's evolution by natural experiment, that suggest
natural selection is not all it's cracked up to be, and that the generating
step (mutating, combining, ...) is where the more interesting stuff happens.
Most of the alternatives to the Neodarwinian Synthesis I have seen are based
in dynamic models of emergence in complex systems. The upshot is, you don't
get creativity for free: you actually still need to solve a problem that is
as hard as AGI in order to get it.

So, you would need to solve the AGI-hard problem of the evolution and
development of life in order to then solve AGI itself (reminds me of the
old SNL sketch: first, get a million dollars...).  Also, my hunch is that
there is quite a bit of overlap between the solutions to the two problems.

Rob

Disclaimer: I'm discussing things above that I'm not and don't claim to be
an expert in, but from what I have seen so far on this list, that should be
alright.  AGI is by its nature very multidisciplinary which necessitates
often being breadth-first, and therefore shallow in some areas.


On Sun, Jun 20, 2010 at 2:06 AM, Steve Richfield
steve.richfi...@gmail.com wrote:

 No, I haven't been smokin' any wacky tobacy. Instead, I was having a long
 talk with my son Eddie about self-organization theory. This is *his* proposal:

 He suggested that I construct a simple NN that couldn't work without self
 organizing, and make dozens/hundreds of different neuron and synapse
 operational characteristics selectable a la genetic programming, put it on
 the fastest computer I could get my hands on, turn it loose trying arbitrary
 combinations of characteristics, and see what the winning combination
 turns out to be. Then, armed with that knowledge, refine the genetic
 characteristics and do it again, and iterate until it *efficiently* self
 organizes. This might go on for months, but self-organization theory might
 just emerge from such an effort. I had a bunch of objections to his
 approach, e.g.

 Q.  What if it needs something REALLY strange to work?
 A.  Who better than you to come up with a long list of really strange
 functionality?

 Q.  There are at least hundreds of bits in the genome.
 A.  Try combinations in pseudo-random order, with each bit getting asserted
 in ~half of the tests. If/when you stumble onto a combination that sort of
 works, switch to varying the bits one-at-a-time, and iterate in this way
 until the best combination is found.

 Q.  Where are we if this just burns electricity for a few months and finds
 nothing?
 A.  Print out the best combination, break out the wacky tobacy, and come up
 with even better/crazier parameters to test.

 I have never written a line of genetic programming, but I know that others
 here have. Perhaps you could bring some rationality to this discussion?

 What would be a simple NN that needs self-organization? Maybe a small
 pot of neurons that could only work if they were organized into layers,
 e.g. a simple 64-neuron system that would work as a 4x4x4-layer visual
 recognition system, given the input that I fed it?

 Any thoughts on how to score partial successes?

 Has anyone tried anything like this in the past?

 Is anyone here crazy enough to want to help with such an effort?

 This Monte Carlo approach might just be simple enough to work, and simple
 enough that it just HAS to be tried.

 All thoughts, stones, and rotten fruit will be gratefully appreciated.

 Thanks in advance.

 Steve
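
A minimal sketch of the two-phase search in Steve's Q&A above (pseudo-random
combinations with each bit asserted in about half of the tests, then one-bit
refinement). The fitness function here is a synthetic stand-in, since scoring
real self-organization is exactly the open question:

    import random

    N_BITS = 200  # "at least hundreds of bits in the genome"
    TARGET = [random.randint(0, 1) for _ in range(N_BITS)]  # hidden winner

    def fitness(genome):
        # Placeholder: in practice, build the NN variant this bit vector
        # selects and score how well it self-organizes. Here: agreement
        # with a hidden "winning" combination, so the sketch runs.
        return sum(g == t for g, t in zip(genome, TARGET))

    # Phase 1: pseudo-random combinations, each bit set in ~half of tests.
    best = max(([random.randint(0, 1) for _ in range(N_BITS)]
                for _ in range(500)), key=fitness)

    # Phase 2: vary the bits one at a time; iterate until no flip helps.
    score, improved = fitness(best), True
    while improved:
        improved = False
        for i in range(N_BITS):
            best[i] ^= 1                   # flip one bit
            s = fitness(best)
            if s > score:
                score, improved = s, True  # keep the improvement
            else:
                best[i] ^= 1               # revert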







RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
 My underlying thought here is that we may all be working on the wrong
 problems. Instead of working on the particular analysis methods (AGI) or
 self-organization theory (NN), perhaps if someone found a solution to
large-
 network stability, then THAT would show everyone the ways to their
 respective goals.
 

For a distributed AGI this is a fundamental problem. The difference is that
a power grid is a fixed network. A distributed AGI need not be that fixed;
it could lose chunks of itself but grow them out somewhere else. Though a
distributed AGI could be required to run as a fixed network.

Some traditional telecommunications networks are power-grid-like. They have
a drastic amount of stability and healing functionality built in, added over
time.

Solutions for large-scale network stability would vary with network topology,
function, etc. Virtual networks play a large part; this relates to the
network's ability to reconstruct itself, meaning knowing how to heal,
reroute, optimize, and grow.

John






Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
Abram,

On Mon, Jun 21, 2010 at 8:38 AM, Abram Demski abramdem...@gmail.com wrote:

 Steve,

 You didn't mention this, so I guess I will: larger animals do generally
 have larger brains, coming close to a fixed brain/body ratio. Smarter
 animals appear to be the ones with a higher brain/body ratio rather than
 simply a larger brain. This to me suggests that the amount of sensory
 information and muscle coordination necessary is the most important
 determiner of the amount of processing power needed. There could be other
 interpretations, however.


It is REALLY hard to compare the intelligence of various animals, because
their innate behavior is overlaid on it. For example, judged by ability to
follow instructions, cats must be REALLY stupid.


 It's also pretty important to say that brains are expensive to fuel. It's
 probably the case that other animals didn't get as smart as us because the
 additional food they could get per ounce brain was less than the additional
 food needed to support an ounce of brain. Humans were in a situation in
 which it was more. So, I don't think your argument from other animals
 supports your hypothesis terribly well.


Presuming for a moment that you are right, then there will be no
singularity! No, this is NOT a reductio ad absurdum proof either way. Why
no singularity?

If there really is a limit to the value of intelligence, then why should we
think that there will be anything special about super-intelligence? Perhaps
we have been deluding ourselves because we want to think that the reason we
aren't all rich is that we just aren't smart enough, when in reality some
entirely different phenomenon may be key. Have YOU observed that success in
life is highly correlated with intelligence?


 One way around your instability if it exists would be (similar to your
 hemisphere suggestion) split the network into a number of individuals which
 cooperate through very low-bandwidth connections.


While helping breadth of analysis, this would seem to absolutely limit
analysis depth to that of one individual.

This would be like an organization of humans working together. Hence,
 multiagent systems would have a higher stability limit.


Providing they don't get into a war of some sort.


 However, it is still the case that we hit a serious diminishing-returns
 scenario once we needed to start doing this (since the low-bandwidth
 connections convey so much less info, we need waaay more processing power
 for every IQ point or whatever).


I see more problems with analysis depth than with bandwidth limitations.


 And, once these organizations got really big, it's quite plausible that
 they'd have their own stability issues.


Yes.

Steve



Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Jim Bromer
I think a real-world solution to grid stability would require greater use of
sensory devices (and some sensory-feedback devices). I really don't know for
sure, but my assumption is that electrical grid management has relied mostly
on the electrical reactions of the grid itself, and here you are saying that
is just not good enough for critical fluctuations in 2010. So while software
is also necessary, of course, the first change in how grid management should
be done is a greater reliance on off-the-grid (or at minimum backed-up
on-grid) sensory devices.

I am quite confident, without knowing anything about the subject, that this
is what needs to be done, because I understand a little about how different
groups of people work, and I have seen how sensory devices like GPS and lidar
have fundamentally changed AI projects: they allowed time-sensitive critical
analysis that contemporary AI was too slow to do on its own. 100 years from
now, electrical grid management won't require another layer of sensors,
because the software analysis of grid fluctuations will be sufficient. On the
other hand, grid managers will no more remove these additional layers of
sensors from the grid a hundred years from now than telephone engineers would
suggest giving up fiber optics just because today's switching and software
could squeeze 1990-era fiber capacity and reliability out of copper wire.
Jim Bromer
On Mon, Jun 21, 2010 at 11:19 AM, Steve Richfield steve.richfi...@gmail.com
 wrote:

 There has been an ongoing presumption that more brain (or computer) means
 more intelligence. I would like to question that underlying presumption.

 That being the case, why don't elephants and other large creatures have
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

 There are all sorts of network-destroying phenomena that rise from complex
 networks, e.g. phase shift oscillators there circular analysis paths enforce
 themselves, computational noise is endlessly analyzed, etc. We know that our
 own brains are just barely stable, as flashing lights throw some people into
 epileptic attacks, etc. Perhaps network stability is the intelligence
 limiter? If so, then we aren't going to get anywhere without first fully
 understanding it.

 Suppose for a moment that theoretically perfect neurons could work in a
 brain of limitless size, but their imperfections accumulate (or multiply) to
 destroy network operation when you get enough of them together. Brains have
 grown larger because neurons have evolved to become more nearly perfect,
 without yet (or ever) having reached perfection. Hence, evolution may have
 struck a balance, where less intelligence directly impairs survivability,
 and greater intelligence impairs network stability, and hence indirectly
 impairs survivability.

 If the above is indeed the case, then AGI and related efforts don't stand a
 snowball's chance in hell of ever outperforming humans, UNTIL the underlying
 network stability theory is well enough understood to perform perfectly to
 digital precision. This wouldn't necessarily have to address all aspects of
 intelligence, but would at minimum have to address large-scale network
 stability.

 One possibility is chopping large networks into pieces, e.g. the
 hemispheres of our own brains. However, like multi-core CPUs, there is work
 for only so many CPUs/hemispheres.

 There are some medium-scale network analogues in the world, e.g. the power
 grid. However, those have high-level central control and lots of
 crashes, so there may not be much to learn from them.

 Note in passing that I am working with some non-AGIers on power grid
 stability issues. While not fully understood, the primary challenge appears
 (to me) to be that the various control mechanisms (that includes humans in
 the loop) violate a basic requirement for feedback stability, namely, that
 the frequency response not drop off faster than 12db/octave at any
 frequency. Present control systems make binary all-or-nothing decisions that
 produce astronomical high-frequency components (edges and glitches) related
 to much lower-frequency phenomena (like overall demand). Other systems then
 attempt to deal with these edges and glitches, with predictably poor
 results. As with the stock market crash of May 6, there is a list of dates of
 major outages and near-outages where the failures are poorly understood. In
 some cases, the lights stayed on, but for a few seconds came ever SO close
 to a widespread outage that dozens of articles were written about them, with
 apparently no one understanding things even to the basic level that I am
 explaining here.

 Hence, a single theoretical insight might guide both power grid development
 and AGI development. For example, perhaps there is a necessary capability of
 components in large networks, to be able to custom tailor their frequency
 response curves to not participate on unstable 

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
John,

Your comments appear to be addressing reliability, rather than stability...

On Mon, Jun 21, 2010 at 9:12 AM, John G. Rose johnr...@polyplexic.comwrote:

  -Original Message-
  From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
  My underlying thought here is that we may all be working on the wrong
  problems. Instead of working on the particular analysis methods (AGI) or
  self-organization theory (NN), perhaps if someone found a solution to
 large-
  network stability, then THAT would show everyone the ways to their
  respective goals.
 

 For a distributed AGI this is a fundamental problem. Difference is that a
 power grid is such a fixed network.


Not really. Switches may connect or disconnect Canada, equipment is
constantly failing and being repaired, etc. In any case, this doesn't seem
to be related to stability, other than it being a lot easier to analyze a
fixed network rather than a variable network.


 A distributed AGI need not be that
 fixed, it could lose chunks of itself but grow them out somewhere else.
 Though a distributed AGI could be required to run as a fixed network.

 Some traditional telecommunications networks are power grid like. They have
 a drastic amount of stability and healing functions built-in as have been
 added over time.


However, there is no feedback, so stability isn't even a potential issue.


 Solutions for large-scale network stabilities would vary per network
 topology, function, etc..


However, there ARE some universal rules, like the 12db/octave requirement.


 Virtual networks play a large part; this would be
 related to the network's ability to reconstruct itself, meaning knowing how
 to heal, reroute, optimize, and grow...


Again, this doesn't seem to relate to millisecond-by-millisecond stability.

Steve





Re: [agi] Fwd: AGI question

2010-06-21 Thread Matt Mahoney
rob levy wrote:
 I am secondarily motivated by the fact that (considerations of morality or 
 amorality aside) AGI is inevitable, though it is far from being a foregone 
 conclusion that powerful general thinking machines will have a first-hand 
 subjective relationship to a world, as living creatures do-- and therefore it 
 is vital that we do as well as possible in understanding what makes systems 
 conscious.  A zombie machine intelligence singularity is something I would 
 refer to rather as a holocaust, even if no one were directly killed, 
 assuming these entities could ultimately prevail over the previous forms of 
 life on our planet.

What do you mean by conscious? If your brain were removed and replaced by a 
functionally equivalent computer that simulated your behavior (presumably a 
zombie), how would you be any different? Why would it matter?

 -- Matt Mahoney, matmaho...@yahoo.com





From: rob levy r.p.l...@gmail.com
To: agi agi@v2.listbox.com
Sent: Mon, June 21, 2010 11:53:29 AM
Subject: [agi] Fwd: AGI question

Hi


I'm new to this list, but I've been thinking about consciousness, cognition and 
AI for about half of my life (I'm 32 years old).  As is probably the case for 
many of us here, my interests began with direct recognition of the depth and 
wonder of varieties of phenomenological experiences-- and attempting to 
comprehend how these constellations of significance fit in with a larger 
picture of what we can reliably know about the natural world.  

I am secondarily motivated by the fact that (considerations of morality or 
amorality aside) AGI is inevitable, though it is far from being a foregone 
conclusion that powerful general thinking machines will have a first-hand 
subjective relationship to a world, as living creatures do-- and therefore it 
is vital that we do as well as possible in understanding what makes systems 
conscious.  A zombie machine intelligence singularity is something I would 
refer to rather as a holocaust, even if no one were directly killed, assuming 
these entities could ultimately prevail over the previous forms of life on our 
planet.

I'm sure I'm not the only one on this list who sees a behavioral/ecological 
level of analysis as the most likely correct level at which to study perception 
and cognition, and perception as being a kind of active relationship between an 
organism and an environment.  Having thoroughly convinced myself of a 
non-dualist, embodied, externalist perspective on cognition, I turn to the 
nature of life itself (and possibly even physics but maybe that level will not 
be necessary) to make sense of the nature of subjectivity.  I like Bohm's or 
Bateson's panpsychism about systems as wholes, and significance as 
informational distinctions (which it would be natural to understand as being 
the basis of subjective experience), but this is descriptive rather than 
explanatory.

I am not a biologist, but I am increasingly interested in finding answers to 
what it is about living organisms that gives them a unity such that something 
is something to the system as a whole.  The line of investigation that 
theoretical biologists like Robert Rosen and other NLDS/chaos people have 
pursued is interesting, but I am unfamiliar with related work that might have 
made more progress on the system-level properties that give life its 
characteristic unity and system-level responsiveness.  To me, this seems the 
most likely candidate for a paradigm shift that would produce AGI.  In contrast 
I'm not particularly convinced that modeling a brain is a good way to get AGI, 
although I'd guess we could learn a few more things about the coordination of 
complex behavior if we could really understand them.

Another way to put this is that obviously evolutionary computation would be 
more than just boring hill-climbing if we knew what an organism even IS 
(perhaps in a more precise computational sense). If we can know what an 
organism is then it should be (maybe) trivial to model concepts, consciousness, 
and high-level semantics to the umpteenth degree, or at least this would be a 
major hurdle I think.

Even assuming a solution to the problem posed above, there is still plenty of 
room for other-minds skepticism about non-living entities implemented on 
questionably foreign mediums, but there would be a lot more reason to sleep well 
knowing that the science/technology is leading in a direction in which questions 
about subjectivity could be meaningfully investigated.

Rob




Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
Jim,

Yours is the prevailing view in the industry. However, it doesn't seem to
work. Even given months of time to analyze past failures, they are often
unable to divine rules that would have reliably avoided the problems. In
short, until you adequately understand the system that your sensors are
sensing, all the readings in the world won't help. Further, when a system is
fundamentally unstable, you must have a control system that completely deals
with the instability, or it absolutely will fail. The present system meets
neither of these criteria.

There is another MAJOR issue. Presuming a power control center in the middle
of the U.S., the round-trip time at the speed of light to each coast is
~16ms, or two half-cycles at 60Hz. In control terms, that is an eternity.
Distributed control requires fundamental stability to function reliably.
Times can be improved by having separate control systems for each coast, but
the interface would still have to meet fundamental stability criteria (like
limiting the rates of change), and our long coasts would still require a
full half-cycle of time to respond.
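
To sanity-check those figures (the one-way distance below is a rough
assumption, not a measured value), a few lines of Python reproduce the ~16ms
round trip:

C_KM_PER_S = 300_000   # speed of light in vacuum; real signals are somewhat slower
ONE_WAY_KM = 2400      # assumed distance from a mid-U.S. control site to a coast
GRID_HZ = 60

round_trip_s = 2 * ONE_WAY_KM / C_KM_PER_S
half_cycle_s = 1 / (2 * GRID_HZ)
print(f"round trip: {round_trip_s * 1000:.1f} ms")                 # ~16.0 ms
print(f"half-cycles at 60Hz: {round_trip_s / half_cycle_s:.1f}")   # ~1.9, i.e. two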

Note that faults must be responded to QUICKLY to save the equipment, and so
cannot be left to a central control system.

So, we end up with the system we now have, that does NOT meet reasonable
stability criteria. Hence, we may forever have occasional outages until the
system is radically re-conceived.

Steve
==
On Mon, Jun 21, 2010 at 9:17 AM, Jim Bromer jimbro...@gmail.com wrote:

 I think a real world solution to grid stability would require greater use
 of sensory devices (and some sensory-feedback devices). ...

Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread Matt Mahoney
rob levy wrote:
 On a related note, what is everyone's opinion on why evolutionary algorithms 
 are such a miserable failure as creative machines, despite their successes 
 in narrow optimization problems?

Lack of computing power. How much computation would you need to simulate the 3 
billion years of evolution that created human intelligence?
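
To put a crude number on that rhetorical question (every constant below is an
assumed round figure for illustration only, not a measurement):

SECONDS_PER_YEAR = 3.15e7
YEARS = 3e9              # assumed duration of evolution on Earth
ORGANISMS = 1e30         # assumed concurrent organisms, mostly microbes
OPS_PER_ORG_PER_S = 1e3  # assumed "useful operations" per organism-second

total_ops = YEARS * SECONDS_PER_YEAR * ORGANISMS * OPS_PER_ORG_PER_S
print(f"{total_ops:.1e} operations")                       # ~9.5e49
print(f"{total_ops / 1e15 / SECONDS_PER_YEAR:.1e} years")  # ~3e27 on a 1-petaFLOP machine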

 -- Matt Mahoney, matmaho...@yahoo.com





From: rob levy r.p.l...@gmail.com
To: agi agi@v2.listbox.com
Sent: Mon, June 21, 2010 11:56:53 AM
Subject: Re: [agi] An alternative plan to discover self-organization theory

(I'm a little late in this conversation.  I tried to send this message the 
other day but I had my list membership configured wrong. -Rob)


-- Forwarded message --
From: rob levy r.p.l...@gmail.com
Date: Sun, Jun 20, 2010 at 5:48 PM
Subject: Re: [agi] An alternative plan to discover self-organization theory
To: agi@v2.listbox.com


On a related note, what is everyone's opinion on why evolutionary algorithms 
are such a miserable failure as creative machines, despite their successes in 
narrow optimization problems?


I don't want to conflate the possibly separable problems of biological 
development and evolution, though they are interrelated.  There are various 
approaches to evolutionary theory, such as Lima de Faria's evolution without 
selection ideas and Reid's evolution by natural experiment, that suggest 
natural selection is not all it's cracked up to be, and that the generating 
step (mutating, combining, ...) is where the more interesting stuff 
happens.  Most of the alternatives to the Neodarwinian Synthesis I have seen are 
based in dynamic models of emergence in complex systems. The upshot is: you 
don't get creativity for free; you actually still need to solve a problem that 
is as hard as AGI in order to get creativity for free. 


So, you would need to solve the AGI-hard problem of evolution and development 
of life, in order to then solve AGI itself (reminds me of the old SNL sketch: 
first, get a million dollars...).  Also, my hunch is that there is quite a 
bit of overlap between the solutions to the two problems.

Rob

Disclaimer: I'm discussing things above that I'm not and don't claim to be an 
expert in, but from what I have seen so far on this list, that should be 
alright.  AGI is by its nature very multidisciplinary which necessitates often 
being breadth-first, and therefore shallow in some areas.




On Sun, Jun 20, 2010 at 2:06 AM, Steve Richfield steve.richfi...@gmail.com 
wrote:

No, I haven't been smokin' any wacky tobacy. Instead, I was having a long talk 
with my son Eddie, about self-organization theory. This is his proposal:

He suggested that I construct a simple NN that couldn't work without self 
organizing, and make dozens/hundreds of different neuron and synapse 
operational characteristics selectable a la genetic programming, put it on the 
fastest computer I could get my hands on, turn it loose trying arbitrary 
combinations of characteristics, and see what the winning combination turns 
out to be. Then, armed with that knowledge, refine the genetic characteristics 
and do it again, and iterate until it efficiently self organizes. This might 
go on for months, but self-organization theory might just emerge from such an 
effort. I had a bunch of objections to his approach, e.g.

Q.  What if it needs something REALLY strange to work?
A.  Who better than you to come up with a long list of really strange 
functionality?

Q.  There are at least hundreds of bits in the genome.


A.  Try combinations in pseudo-random order, with each bit getting asserted in 
~half of the tests. If/when you stumble onto a combination that sort of works, 
switch to varying the bits one-at-a-time, and iterate in this way until the 
best combination is found.

Q.  Where are we if this just burns electricity for a few months and finds 
nothing?
A.  Print out the best combination, break out the wacky tobacy, and come up 
with even better/crazier parameters to test.

I have never written a line of genetic programming, but I know that others 
here have. Perhaps you could bring some rationality to this discussion?

What would be a simple NN that needs self-organization? Maybe a small pot 
of neurons that could only work if they were organized into layers, e.g. a 
simple 64-neuron system that would work as a 4x4x4-layer visual recognition 
system, given the input that I fed it?

Any thoughts on how to score partial successes?

Has anyone tried anything like this in the past?

Is anyone here crazy enough to want to help with such an effort?

This Monte Carlo approach might just be simple enough to work, and simple 
enough that it just HAS to be tried.

All thoughts, stones, and rotten fruit will be gratefully appreciated.

Thanks in advance.

Steve
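
The search loop described above is compact enough to sketch. Here is one
minimal, hypothetical reading of it in Python; the score function (higher is
better), genome width, probe count, and "sort of works" threshold are all
stand-ins, not anything specified above:

import random

def search(score, n_bits=300, probes=10_000, works=0.1):
    # Phase 1: pseudo-random probing, each bit asserted in ~half of the tests.
    best = max((tuple(random.getrandbits(1) for _ in range(n_bits))
                for _ in range(probes)), key=score)
    if score(best) < works:      # nothing even sort of works yet
        return best
    # Phase 2: vary the bits one at a time, iterating until no flip helps.
    improved = True
    while improved:
        improved = False
        for i in range(n_bits):
            cand = best[:i] + (1 - best[i],) + best[i + 1:]
            if score(cand) > score(best):
                best, improved = cand, True
    return best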

 


RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
 John,
 
 Your comments appear to be addressing reliability, rather than
stability...

Both can be very interrelated. It can be an oversimplification to separate
them, or too impractical/theoretical. 

 On Mon, Jun 21, 2010 at 9:12 AM, John G. Rose johnr...@polyplexic.com
 wrote:
  -Original Message-
  From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
  My underlying thought here is that we may all be working on the wrong
  problems. Instead of working on the particular analysis methods (AGI) or
  self-organization theory (NN), perhaps if someone found a solution to
 large-
  network stability, then THAT would show everyone the ways to their
  respective goals.
 
 For a distributed AGI this is a fundamental problem. Difference is that a
 power grid is such a fixed network.
 
 Not really. Switches may connect or disconnect Canada, equipment is
 constantly failing and being repaired, etc. In any case, this doesn't seem
to be
 related to stability, other than it being a lot easier to analyze a fixed
network
 rather than a variable network.
 

There is a fixed number of copper wires going into a node. 

The network is usually a hierarchy of networks. Fixed may be more
limiting, sophisticated, and kludged, rendering it more difficult to deal with,
so don't assume otherwise.

 A distributed AGI need not be that
 fixed, it could lose chunks of itself but grow them out somewhere else.
 Though a distributed AGI could be required to run as a fixed network.
 
 Some traditional telecommunications networks are power grid like. They
 have
 a drastic amount of stability and healing functions built-in as have been
 added over time.
 
 However, there is no feedback, so stability isn't even a potential issue.

No feedback? Remember some traditional telecommunications networks run over
copper with power, and are analog; there are huge feedback issues, many of
which are taken care of at a lower signaling level or with external equipment
such as echo cancellers. Again though, there is a hierarchy and mesh of
various networks here. I've suggested traditional telecommunications since
they are vastly more complex and real-time, and many other networks have
learned from them.

 
 Solutions for large-scale network stabilities would vary per network
 topology, function, etc..
 
 However, there ARE some universal rules, like the 12db/octave requirement.
 

Really? Do networks such as botnets really care about this? Or does it
apply?

 Virtual networks play a large part; this would be
 related to the network's ability to reconstruct itself, meaning knowing how
 to heal, reroute, optimize, and grow...
 
 Again, this doesn't seem to relate to millisecond-by-millisecond
stability.
 

It could, as the virtual network might contain images of the actual
network as an internal model, and use this to change the network
structure to a more stable one if there were timing issues...

Just some thoughts...

John







[agi] Formulaic vs. Equation AGI

2010-06-21 Thread Steve Richfield
One constant in ALL proposed methods leading to computational intelligence
is formulaic operation, where agents, elements, neurons, etc., process
inputs to produce outputs. There is scant biological evidence for this,
and plenty of evidence for a balanced equation operation. Note that
unbalancing one side, e.g. by injecting current, would result in a
responding imbalance on the other side, so that synapses might (erroneously)
appear to be one-way. However, there is plenty of evidence that information
flows both ways, e.g. retrograde flow of information to support learning.

Even looking at seemingly one-way things like the olfactory nerve, there are
axons going both ways.

No, I don't have any sort of comprehensive balanced-equation theory of
intelligent operation, but I can see the interesting possibility.

Suppose that the key to life is not competition, but rather is fitting into
the world. Perhaps we don't so much observe things as orchestrate them to
our needs. Hence, we and our world are in a gigantic loop, adjusting our
outputs to achieve balancing characteristics in our inputs. Imbalances
precipitate changes in action to achieve balance. The only difference
between us and our world is implementation detail. We do our part, and it
does its part. I'm sure that there are Zen Buddhists out there who would
just LOVE this yin-yang view of things.
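
For what it is worth, the imbalance-drives-correction idea can be put in toy
numerical form. The sketch below is only one speculative reading, with an
arbitrary gain; it is not claimed as the actual mechanism:

def relax(left, right, steps=50, gain=0.2):
    # A unit holding a two-sided relation: any imbalance injected on one
    # side drives a compensating change on both sides until balance returns,
    # so information flows both ways.
    for _ in range(steps):
        imbalance = left - right
        left -= gain * imbalance
        right += gain * imbalance
    return left, right

print(relax(1.0, 0.0))   # both sides settle near 0.5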

Any thoughts?

Steve





Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
John,

On Mon, Jun 21, 2010 at 10:06 AM, John G. Rose johnr...@polyplexic.comwrote:


  Solutions for large-scale network stabilities would vary per network
  topology, function, etc..
 
  However, there ARE some universal rules, like the 12db/octave
 requirement.
 

 Really? Do networks such as botnets really care about this? Or does it
 apply?


Anytime negative feedback can become positive feedback because of delays or
phase shifts, this becomes an issue. Many competent EE people fail to see
the phase shifting that many decision processes can introduce: even when
responding as quickly as possible, finite speed makes for finite delays and
sharp frequency cutoffs, resulting in instabilities at those frequency
cutoff points because of violation of the 12db/octave rule. Of course, this
ONLY applies in feedback systems and NOT in forward-only systems, except at
the real-world point of feedback, e.g. the bots themselves.

Of course, there is the big question of just what it is that is being
attenuated in the bowels of an intelligent system. Usually, it is
computational delays making sharp frequency-limited attenuation at their
response speeds.

Every gamer is well aware of the oscillations that long ping times can
introduce in people's (and intelligent bots') behavior. Again, this is
basically the same 12db/octave phenomenon.
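
That ping-time effect is easy to reproduce numerically. In the sketch below
(gain, delay, and step count are arbitrary illustrative values), the same
negative-feedback rule settles with no delay and oscillates with a few steps
of delay:

def run(delay, gain=0.6, steps=12):
    # Negative feedback applied to a stale measurement: hist[0] is the
    # controller's view of the state, `delay` steps out of date.
    x, hist = 1.0, [1.0] * (delay + 1)
    out = []
    for _ in range(steps):
        x -= gain * hist[0]
        hist = hist[1:] + [x]
        out.append(round(x, 2))
    return out

print(run(delay=0))   # smooth decay toward 0
print(run(delay=3))   # growing oscillation: the feedback has turned positive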

Steve





Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Mike Tintner
Steve: For example, based on ability to follow instruction, cats must be REALLY 
stupid. 

Either that or really smart. Who wants to obey some dumb human's instructions?





Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Ian Parker
Isn't this the argument for GAs running on multicore processors? Now each
organism has one core (or a fraction of a core). The brain will then evaluate
*fitness*, given a fitness criterion.

The fact that they can be run efficiently in parallel is one of the advantages
of GAs.
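
A minimal sketch of that one-organism-per-core point, with a hypothetical
fitness function standing in for whatever is actually being scored (the GA
itself is unchanged; only the expensive scoring step is farmed out):

from multiprocessing import Pool
import random

def fitness(genome):          # placeholder: any pure function of a genome works
    return sum(genome)

if __name__ == "__main__":
    population = [[random.getrandbits(1) for _ in range(64)] for _ in range(256)]
    with Pool() as pool:      # one worker process per core by default
        scores = pool.map(fitness, population)
    print(max(scores))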

Let us look at this another way: when an intelligent person thinks about a
problem, they will think about it in terms of a set of alternatives. This
could be said to be the start of genetic reasoning. So it does in fact take
place now.

A GA is the simplest parallel system which you can think of for purposes of
illustration. However, when we answer *Jeopardy*-type questions, parallelism
is involved. This becomes clear when we look at how Watson actually works:
http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html
It works in parallel and then finds the most probable answer.


  - Ian Parker



On 21 June 2010 16:38, Abram Demski abramdem...@gmail.com wrote:

 Steve,

 You didn't mention this, so I guess I will: larger animals do generally
 have larger brains, coming close to a fixed brain/body ratio. Smarter
 animals appear to be the ones with a higher brain/body ratio rather than
 simply a larger brain. This to me suggests that the amount of sensory
 information and muscle coordination necessary is the most important
 determiner of the amount of processing power needed. There could be other
 interpretations, however.
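
To put rough numbers on that ratio: the standard encephalization quotient (EQ)
normalizes brain mass by the mass expected for body size, using Jerison's fit
E = 0.12 * P^(2/3) (masses in grams). The masses below are rough textbook
figures, not exact values:

def eq(brain_g, body_g):
    # Ratio of actual brain mass to the mass expected for this body size.
    return brain_g / (0.12 * body_g ** (2 / 3))

print(f"human:    {eq(1350, 65_000):.1f}")      # ~7: far above the trend line
print(f"elephant: {eq(4800, 4_000_000):.1f}")   # ~1.6: big brain, near trend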

 It's also pretty important to say that brains are expensive to fuel. It's
 probably the case that other animals didn't get as smart as us because the
 additional food they could get per ounce of brain was less than the additional
 food needed to support an ounce of brain. Humans were in a situation in
 which it was more. So, I don't think your argument from other animals
 supports your hypothesis terribly well.

 One way around your instability, if it exists, would be (similar to your
 hemisphere suggestion) to split the network into a number of individuals which
 cooperate through very low-bandwidth connections. This would be like an
 organization of humans working together. Hence, multiagent systems would
 have a higher stability limit. However, it is still the case that we hit a
 serious diminishing-returns scenario once we needed to start doing this
 (since the low-bandwidth connections convey so much less info, we need waaay
 more processing power for every IQ point or whatever). And, once these
 organizations got really big, it's quite plausible that they'd have their
 own stability issues.

 --Abram

 On Mon, Jun 21, 2010 at 11:19 AM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 There has been an ongoing presumption that more brain (or computer) means
 more intelligence. I would like to question that underlying presumption. ...

Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread Ian Parker
My comment is this. The brain in fact takes whatever speed it needs. For
simple processing it takes the full speed. More complex processing does not
require the same speed and so is taken more slowly. This is really an
extension of what DESTIN does spatially.


  - Ian Parker

On 21 June 2010 15:30, deepakjnath deepakjn...@gmail.com wrote:

 The brain does not get the high frame rate signals, as the eye itself
 only gives the brain images at 24 frames per second. Else you wouldn't be
 able to watch a movie.
 Any comments?









RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
 Really? Do networks such as botnets really care about this? Or does it
 apply?
 
 Anytime negative feedback can become positive feedback because of delays
 or phase shifts, this becomes an issue. ...
 
 Every gamer is well aware of the oscillations that long ping times can
 introduce in people's (and intelligent bots') behavior. Again, this is
 basically the same 12db/octave phenomenon.
 

OK, excuse my ignorance on this - a design issue in distributed intelligence
is how to split up things amongst the agents. I see it as a hierarchy of
virtual networks, with the lowest level being the substrate like IP sockets
or something else but most commonly TCP/UDP.

The protocols above need to break up the work and the knowledge
distribution, so the 12db/octave phenomenon must apply there too. 

I assume any intelligence processing engine must include a harmonic
mathematical component, since ALL things are basically networks, especially
intelligence. 

This might be an overly aggressive assumption, but it seems from observation
that intelligence/consciousness exhibits some sort of harmonic property, or
levels.

John









Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread rob levy
Matt,

I'm not sure I buy that argument for the simple reason that we have massive
cheap processing now and pretty good knowledge of the initial conditions of
life on our planet (if we are going literal here and not EC in the
abstract), but it's definitely a possible answer.  Perhaps not enough people
have attempted to run evolutionary computation experiments at these massive
scales either.

Rob

On Mon, Jun 21, 2010 at 12:59 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 rob levy wrote:
  On a related note, what is everyone's opinion on why evolutionary
 algorithms are such a miserable failure as creative machines, despite
 their successes in narrow optimization problems?

 Lack of computing power. How much computation would you need to simulate
 the 3 billion years of evolution that created human intelligence?


 -- Matt Mahoney, matmaho...@yahoo.com



Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread David Jones
Rob,

Real evolution had full freedom to evolve. Genetic algorithms usually don't.
If they did, the number of calculations needed to really simulate evolution
on the scale that created us would be so astronomical that it would not be
possible. So, what Matt said is absolutely correct. There probably isn't
enough processing power in the world to do what real evolution did, and
there probably never will be.

So, let's say that you restrict the genetic algorithm. Well, now it doesn't
have the freedom to find the right solution. You may think you've given it
enough freedom, but most likely you have not. If you do give it enough
freedom, it's likely to take all eternity to find a solution to many, if not
most, real life problems.
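
A quick illustration of the combinatorics behind that point, using round
assumed figures (the 300-bit genome echoes the "at least hundreds of bits"
mentioned earlier in the thread):

genome_bits = 300
space = 2 ** genome_bits        # ~2.0e90 candidate genomes

evals_per_sec = 1e12            # a generous assumed evaluation rate
seconds_per_year = 3.15e7
print(f"{space:.1e} genomes")
print(f"{space / (evals_per_sec * seconds_per_year):.1e} years to enumerate")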

Dave

On Mon, Jun 21, 2010 at 3:15 PM, rob levy r.p.l...@gmail.com wrote:

 Matt,

 I'm not sure I buy that argument for the simple reason that we have massive
 cheap processing now and pretty good knowledge of the initial conditions of
 life on our planet ...

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Russell Wallace
On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
 That being the case, why don't elephants and other large creatures have 
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

Personally I've always wondered how elephants managed to evolve brains
as large as they currently have. How much intelligence does it take to
sneak up on a leaf? (Granted, intraspecies social interactions seem to
provide at least part of the answer.)

 There are all sorts of network-destroying phenomena that arise from complex 
 networks, e.g. phase-shift oscillators where circular analysis paths reinforce 
 themselves, computational noise that is endlessly analyzed, etc. We know that our 
 own brains are just barely stable, as flashing lights throw some people into 
 epileptic attacks, etc. Perhaps network stability is the intelligence limiter?

Empirically, it isn't.

 Suppose for a moment that theoretically perfect neurons could work in a brain 
 of limitless size, but their imperfections accumulate (or multiply) to 
 destroy network operation when you get enough of them together. Brains have 
 grown larger because neurons have evolved to become more nearly perfect

Actually it's the other way around. Brains compensate for
imperfections (both transient error and permanent failure) in neurons
by using more of them.  Note that, as the number of transistors on a
silicon chip increases, the extent to which our chip designs do the
same thing also increases.

 There are some medium-scale network analogues in the world, e.g. the power 
 grid. However, those have high-level central control and lots of crashes

The power in my neighborhood fails once every few years (and that's
from all causes, including 'the cable guys working up the street put a
JCB through the line', not just network crashes). If you're getting
lots of power failures in your neighborhood, your electricity supply
company is doing something wrong.

 I wonder, does the very-large-scale network problem even have a prospective 
 solution? Is there any sort of existence proof of this?

Yes, our repeated successes in simultaneously improving both the size
and stability of very large scale networks (trade, postage, telegraph,
electricity, road, telephone, Internet) serve as very nice existence
proofs.




[agi] Read Fast, Trade Fast

2010-06-21 Thread Mike Tintner
http://www.zerohedge.com/article/fast-reading-computers-are-about-drink-your-trading-milkshake




Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
Russell,

On Mon, Jun 21, 2010 at 1:29 PM, Russell Wallace
russell.wall...@gmail.comwrote:

 On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
 steve.richfi...@gmail.com wrote:
  That being the case, why don't elephants and other large creatures have
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

 Personally I've always wondered how elephants managed to evolve brains
 as large as they currently have. How much intelligence does it take to
 sneak up on a leaf? (Granted, intraspecies social interactions seem to
 provide at least part of the answer.)


I suspect that intra-species social behavior will expand to utilize all
available intelligence.


  There are all sorts of network-destroying phenomena that arise from
 complex networks, e.g. phase-shift oscillators where circular analysis paths
 reinforce themselves, computational noise that is endlessly analyzed, etc. We know
 that our own brains are just barely stable, as flashing lights throw some
 people into epileptic attacks, etc. Perhaps network stability is the
 intelligence limiter?

 Empirically, it isn't.


I see what you are saying, but I don't think you have made your case...


  Suppose for a moment that theoretically perfect neurons could work in a
 brain of limitless size, but their imperfections accumulate (or multiply) to
 destroy network operation when you get enough of them together. Brains have
 grown larger because neurons have evolved to become more nearly perfect

 Actually it's the other way around. Brains compensate for
 imperfections (both transient error and permanent failure) in neurons
 by using more of them.


William Calvin, the author who is most credited with making and spreading
this view, and I had a discussion on his Seattle rooftop, while throwing pea
gravel at a target planter. His assertion was that we utilize many parallel
circuits to achieve accuracy, and mine was that it was something else, e.g.
successive approximation. I pointed out that if one person tossed the pea
gravel by putting it on their open hand and pushing it at a target, and the
other person blocked their arm, that the relationship between how much of
the stroke was truncated and how great the error was would disclose the
method of calculation. The question boils down to the question of whether
the error grows drastically even with small truncation of movement (because
a prototypical throw is used, as might be expected from a parallel
approach), or grows exponentially because error correcting steps have been
lost. We observed apparent exponential growth, much smaller than would be
expected from parallel computation, though no one was keeping score.

In summary, having performed the above experiment, I reject this common
view.

Note that, as the number of transistors on a
 silicon chip increases, the extent to which our chip designs do the
 same thing also increases.


Another pet peeve of mine. They could/should do MUCH more fault tolerance
than they now do. Present puny efforts are completely ignorant of past
developments, e.g. Tandem NonStop computers.


  There are some medium-scale network similes in the world, e.g. the power
 grid. However, there they have high-level central control and lots of
 crashes

 The power in my neighborhood fails once every few years (and that's
 from all causes, including 'the cable guys working up the street put a
 JCB through the line', not just network crashes). If you're getting
 lots of power failures in your neighborhood, your electricity supply
 company is doing something wrong.


If you look at the failures/bandwidth, it is pretty high. The point is that
the information bandwidth of the power grid is EXTREMELY low, so it
shouldn't fail at all, at least not more than maybe once per century.
However, just like the May 6 problem, it sometimes gets itself into trouble
of its own making. Any overload SHOULD simply result in shutting down some
low-priority load, like the heaters in steel plants, and this usually works
as planned. However, it sometimes fails for VERY complex reasons - so
complex that PhD engineers are unable to put it into words, despite having
millisecond-by-millisecond histories to work from.


  I wonder, does the very-large-scale network problem even have a
 prospective solution? Is there any sort of existence proof of this?

 Yes, our repeated successes in simultaneously improving both the size
 and stability of very large scale networks (trade,


NOT stable at all. Just look at the condition of the world's economy.


 postage, telegraph,
 electricity, road, telephone, Internet)


None of these involve feedback, the fundamental requirement to be a
network rather than a simple tree structure. This despite common misuse of
the term network to cover everything with lots of interconnections.


 serve as very nice existence
 proofs.


I'm still looking.

Steve




Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Russell Wallace
On Mon, Jun 21, 2010 at 11:05 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
 Another pet peeve of mine. They could/should do MUCH more fault tolerance 
 than they now are. Present puny efforts are completely ignorant of past 
 developments, e.g. Tandem Nonstop computers.

Or perhaps they just figure once the mean time between failure is on
the order of, say, a year, customers aren't willing to pay much for
further improvement. (Note that things like financial databases which
still have difficulty scaling horizontally, do get more fault
tolerance than an ordinary PC. Note also that they pay a hefty premium
for this, more than you or I would be willing or able to pay.)

 The power in my neighborhood fails once every few years (and that's
 from all causes, including 'the cable guys working up the street put a
 JCB through the line', not just network crashes). If you're getting
 lots of power failures in your neighborhood, your electricity supply
 company is doing something wrong.

 If you look at the failures/bandwidth, it is pretty high.

So what? Nobody except you cares about that metric. Anyway, the phone
system is in the same league, and the Internet is a lot closer to it than
it was in the past, and those have vastly higher bandwidth.

 Yes, our repeated successes in simultaneously improving both the size
 and stability of very large scale networks (trade,

 NOT stable at all. Just look at the condition of the world's economy.

Better than it was in the 1930s, despite a lot greater complexity.

 postage, telegraph,
 electricity, road, telephone, Internet)

 None of these involve feedback, the fundamental requirement to be a network 
 rather than a simple tree structure. This despite common misuse of the term 
 network to cover everything with lots of interconnections.

All of them involve massive amounts of feedback. Unless you're adopting a
private definition of the word feedback, in which case by your private
definition, if it is to be at all consistent, neither brains nor computers
running AI programs will involve feedback either, so it's immaterial.




Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
John,

Hmmm, I thought that with your EE background, the 12db/octave rule would
bring back old sophomore-level course work. OK, so you were sick that day.
I'll try to fill in the blanks here...

On Mon, Jun 21, 2010 at 11:16 AM, John G. Rose johnr...@polyplexic.comwrote:


  Of course, there is the big question of just what it is that is being
  attenuated in the bowels of an intelligent system. Usually, it is
  computational delays making sharp frequency-limited attenuation at their
  response speeds.
 
  Every gamer is well aware of the oscillations that long ping times can
  introduce in people's (and intelligent bot's) behavior. Again, this is
 basically
  the same 12db/octave phenomenon.
 

 OK, excuse my ignorance on this - a design issue in distributed
 intelligence
 is how to split up things amongst the agents. I see it as a hierarchy of
 virtual networks, with the lowest level being the substrate like IP sockets
 or something else but most commonly TCP/UDP.

 The protocols above that need to break up the work, and the knowledge
 distribution, so the 12db/octave phenomenon must apply there too.


RC low-pass circuits exhibit 6db/octave rolloff and 90 degree phase shifts.
12db/octave corresponds to a 180 degree phase shift. More than 180 degrees
and you are into positive feedback. At 24db/octave, you are at maximum *
positive* feedback, which makes great oscillators.

The 12 db/octave limit applies to entire loops of components, and not to the
individual components. This means that you can put a lot of 1db/octave
components together in a big loop and get into trouble. This is commonly
encountered in complex analog filter circuits that incorporate 2 or more
op-amps in a single feedback loop. Op amps are commonly compensated to
have 6db/octave rolloff. Put 2 of them together and you are right at the
precipice of 12db/octave. Add some passive components that have their own
rolloffs, and you are over the edge of stability, and the circuit sits there
and oscillates on its own. The usual cure is to replace one of the op-amps
with an *un*compensated op-amp with ~0db/octave rolloff, until it gets to
its maximum frequency, whereupon it has an astronomical rolloff. However,
that astronomical rolloff works BECAUSE the loop gain at that frequency is
less than 1, so the circuit cannot self-regenerate and oscillate at that
frequency.
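
For concreteness, here is a minimal Python sketch (an illustration only -
the pole frequencies and gain are made-up values, and a real analysis would
use the full Nyquist criterion) that cascades single-pole 6db/octave
sections and checks whether the loop still has gain left when the
accumulated phase lag reaches 180 degrees:

import math

def loop_response(freq_hz, pole_freqs_hz, dc_gain):
    # Gain and phase of a loop of cascaded single-pole sections. Each pole
    # contributes up to 90 degrees of lag and 6db/octave of rolloff.
    gain, phase_deg = dc_gain, 0.0
    for fp in pole_freqs_hz:
        ratio = freq_hz / fp
        gain /= math.sqrt(1.0 + ratio * ratio)        # single-pole magnitude
        phase_deg -= math.degrees(math.atan(ratio))   # single-pole phase lag
    return gain, phase_deg

def is_unstable(pole_freqs_hz, dc_gain):
    # Crude Bode check: unstable if gain >= 1 where the lag reaches 180.
    f = 1.0
    while f < 1e9:
        gain, phase = loop_response(f, pole_freqs_hz, dc_gain)
        if phase <= -180.0:
            return gain >= 1.0
        f *= 1.05
    return False

# Two compensated op-amps plus one passive rolloff: three poles in one
# loop, so the lag passes 180 degrees while the gain is still over 1.
print(is_unstable([10.0, 10.0, 1000.0], dc_gain=1e5))  # True - oscillates
print(is_unstable([10.0], dc_gain=1e5))                # False - one pole is safe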

Considering the above and the complexity of neural circuits, it would seem
that neural circuits would have to have absolutely flat responses and some
central rolloff mechanism, maybe one of the ~200 different types of neurons,
or alternatively, would have to be able to custom-tailor their responses to
work in concert to roll off at a reasonable rate. A third alternative is
discussed below, where you let them go unstable, and actually utilize the
instability to achieve some incredible results.


 I assume any intelligence processing engine must include a harmonic
 mathematical component


I'm not sure I understand what you are saying here. Perhaps you have
discovered the recipe for the secret sauce?


 since ALL things are basically network, especially
 intelligence.


Most of the things we call networks really just pass information along and
do NOT have feedback mechanisms. Power control is an interesting exception,
but most of those guys are unable to even carry on an intelligent
conversation about the subject. No wonder the power networks have problems.


 This might be an overly aggressive assumption but it seems from observance
 that intelligence/consciousness exhibits some sort of harmonic property, or
 levels.


You apparently grok something about harmonics that I don't (yet) grok.
Please enlighten me.

Are you familiar with regenerative receiver operation where operation is on
the knife-edge of instability, or super-regenerative receiver operation,
wherein an intentionally UNstable circuit is operated to achieve phenomenal
gain and specifically narrow bandwidth? These were common designs back in
the early vacuum tube era, when active components cost a day's wages. Given
all of the observed frequency components coming from neural circuits,
perhaps neurons do something similar to actually USE instability to their
benefit?! Is this related to your harmonic thoughts?
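
To put a number on that knife-edge, the standard feedback formula is all
you need (a toy calculation with made-up gain and bandwidth values, not a
model of any particular receiver): closed-loop gain is A/(1 - A*beta), so
as the positive-feedback factor A*beta creeps toward 1 the gain explodes
while the effective bandwidth of a tuned stage narrows by the same factor:

# Regeneration in one formula: A_cl = A / (1 - A*beta).
A = 10.0         # open-loop stage gain (assumed value)
bw_open = 10e3   # open-loop bandwidth in Hz (assumed value)

for loop_gain in (0.0, 0.5, 0.9, 0.99, 0.999):
    a_cl = A / (1.0 - loop_gain)         # closed-loop gain
    bw_cl = bw_open * (1.0 - loop_gain)  # bandwidth shrinks as gain grows
    print(f"A*beta={loop_gain:.3f}  gain={a_cl:8.1f}  bandwidth={bw_cl:8.1f} Hz")
# At A*beta = 1 the stage oscillates - the knife-edge described above.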

Thanks.

Steve





Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread Mark Nuzzolilo
My view on the efficiency of the brain's learning has to do with low latency
communications in general, which is a similar concept to high frame rates,
but not limited to the visual senses.  Low latency produces rapid feedback.
Rapid feedback produces rapid adaptation, and the reduced weight of the time
axis in the formula results in a reduction of short-term memory loss and
thus more resources for the brain to work with.
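
One way to make the reduced weight of the time axis concrete (this is only
one reading, with an assumed exponential memory trace and made-up time
constants): if credit for an outcome decays while the system waits for
feedback, then cutting the latency directly increases how much of the
learning signal is still usable when it arrives.

import math

def surviving_credit(latency_s, decay_tau_s=0.5):
    # Fraction of a learning signal left after waiting latency_s seconds,
    # assuming a simple exponential eligibility/memory trace.
    return math.exp(-latency_s / decay_tau_s)

for latency in (1.0, 0.5, 0.1, 0.01):
    print(f"latency {latency:5.2f}s -> {surviving_credit(latency):.3f} of the signal usable")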





Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread Mark Nuzzolilo
That should be a reduction of the penalty caused by short-term memory
loss.

On Mon, Jun 21, 2010 at 4:20 PM, Mark Nuzzolilo nuzz...@gmail.com wrote:

 My view on the efficiency of the brain's learning has to do with low
 latency communications in general, which is a similar concept to high frame
 rates, but not limited to the visual senses.  Low latency produces rapid
 feedback.  Rapid feedback produces rapid adaptation, and the reduced
 weight of the time axis in the formula results in a reduction of short-term
 memory loss and thus more resources for the brain to work with.






Re: [agi] High Frame Rates Reduce Uncertainty

2010-06-21 Thread Michael Swan
Hi,
* AGI should be scalable - more data just means the potential for more
accurate results.
* More data can chew up more computation time without a benefit, i.e. if
all you want to do is identify a bird, it's still a bird at 1 fps and
1000 fps.
* Don't aim for precision, aim for generality. E.g. AGI KNOWS 1000
objects. If you test to see if your object is a bird, and it is not, you
still have 999 possible objects. If you test whether it is an animal, you
can split your search space in half - you've reduced the possibilities to
500.  Successive generalisations produce accuracy, sometimes referred to
as a hierarchical approach (see the sketch below).
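
As a rough illustration of that halving argument (a sketch only - the
object counts are arbitrary and real category trees are rarely balanced),
a hierarchy of is-it-in-this-half tests identifies one object among N in
about log2(N) questions, versus up to N-1 one-class-at-a-time tests:

import math

def tests_needed(n_objects):
    linear = n_objects - 1                          # 'is it a bird?' one class at a time
    hierarchical = math.ceil(math.log2(n_objects))  # halve the space each test
    return linear, hierarchical

for n in (10, 1000, 1000000):
    lin, hier = tests_needed(n)
    print(f"{n:>8} objects: {lin:>7} linear tests vs {hier:>2} hierarchical tests")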

On Fri, 2010-06-18 at 14:19 -0400, David Jones wrote:
 I just came up with an awesome idea. I just realized that the brain
 takes advantage of high frame rates to reduce uncertainty when it is
 estimating motion. The slower the frame rate, the more uncertainty
 there is because objects may have traveled too far between images to
 match with high certainty using simple techniques. 
 
 So, this made me think, what if the secret to the brain's ability to
 learn generally stems from this high frame rate trick. What if we made
 a system that could process even higher frame rates than the brain can.
 By doing this you can reduce the uncertainty of matches very very low
 (well in my theory so far). If you can do that, then you can learn
 about the objects in a video, how they move together or separately
 with very high certainty. 
 
 You see, matching is the main barrier when learning about objects. But
 with a very high frame rate, we can use a fast algorithm and could
 potentially reduce the uncertainty to almost nothing. Once we learn
 about objects, matching gets easier because now we have training data
 and experience to take advantage of. 
 
 In addition, you can also gain knowledge about lighting, color
 variation, noise, etc. With that knowledge, you can then automatically
 create a model of the object with extremely high confidence. You will
 also be able to determine the effects of light and noise on the
 object's appearance, which will help match the object invariantly in
 the future. It allows you to determine what is expected and unexpected
 for the object's appearance with much higher confidence. 
 
 Pretty cool idea huh?
 
 Dave
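
To put a number on the uncertainty claim above (a toy simulation - the
speed, object count and matching rule are all assumptions): at frame rate F
an object moving at speed v travels v/F pixels between frames, so naive
nearest-neighbour matching is ambiguous whenever another object sits inside
that radius, and the ambiguity rate collapses as F rises:

import random

random.seed(1)
N, SPEED, AREA = 200, 500.0, 1000.0  # objects, px/s, px (assumed values)
points = [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(N)]

def ambiguous_fraction(fps):
    # Fraction of objects with a rival inside the per-frame travel radius,
    # i.e. cases where nearest-neighbour matching could pick wrongly.
    radius = SPEED / fps  # maximum displacement between consecutive frames
    bad = 0
    for i, (x, y) in enumerate(points):
        if any((x - u) ** 2 + (y - v) ** 2 < radius ** 2
               for j, (u, v) in enumerate(points) if j != i):
            bad += 1
    return bad / N

for fps in (5, 30, 120, 1000):
    print(f"{fps:4d} fps -> {ambiguous_fraction(fps):.2f} of matches ambiguous")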





RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 John,
 
 Hmmm, I thought that with your EE background, the 12db/octave figure would
 bring back old sophomore-level course work. OK, so you were sick that day.
 I'll try to fill in the blanks here...

Thanks man. Appreciate it.  What little EE training I did undergo was brief
and painful :)

 On Mon, Jun 21, 2010 at 11:16 AM, John G. Rose johnr...@polyplexic.com
 wrote:
 
  Of course, there is the big question of just what it is that is being
  attenuated in the bowels of an intelligent system. Usually, it is
  computational delays making sharp frequency-limited attenuation at their
  response speeds.
 
  Every gamer is well aware of the oscillations that long ping times can
  introduce in people's (and intelligent bot's) behavior. Again, this is
 basically
  the same 12db/octave phenomenon.
 
 OK, excuse my ignorance on this - a design issue in distributed
intelligence
 is how to split up things amongst the agents. I see it as a hierarchy of
 virtual networks, with the lowest level being the substrate like IP
sockets
 or something else but most commonly TCP/UDP.
 
 The protocols above that need to break up the work, and the knowledge
 distribution, so the 12db/octave phenomenon must apply there too.
 
 RC low-pass circuits exhibit 6db/octave rolloff and 90 degree phase
shifts.
 12db/octave corresponds to a 180 degree phase shift. More than 180
 degrees and you are into positive feedback. At 24db/octave, you are at
 maximum positive feedback, which makes great oscillators.
 
 The 12 db/octave limit applies to entire loops of components, and not to
the
 individual components. This means that you can put a lot of 1db/octave
 components together in a big loop and get into trouble. This is commonly
 encountered in complex analog filter circuits that incorporate 2 or more
op-
 amps in a single feedback loop. Op amps are commonly compensated to
 have 6db/octave rolloff. Put 2 of them together and you are right at the
precipice
 of 12db/octave. Add some passive components that have their own rolloffs,
 and you are over the edge of stability, and the circuit sits there and
oscillates
 on its own. The usual cure is to replace one of the op-amps with an
 uncompensated op-amp with ~0db/octave rolloff, until it gets to its
 maximum frequency, whereupon it has an astronomical rolloff. However,
 that astronomical rolloff works BECAUSE the loop gain at that frequency is
 less than 1, so the circuit cannot self-regenerate and oscillate at that
 frequency.
 
 Considering the above and the complexity of neural circuits, it would seem
 that neural circuits would have to have absolutely flat responses and some
 central rolloff mechanism, maybe one of the ~200 different types of
 neurons, or alternatively, would have to be able to custom-tailor their
 responses to work in concert to roll off at a reasonable rate. A third
 alternative is discussed below, where you let them go unstable, and
actually
 utilize the instability to achieve some incredible results.
 
 I assume any intelligence processing engine must include a harmonic
 mathematical component
 
 I'm not sure I understand what you are saying here. Perhaps you have
 discovered the recipe for the secret sauce?

Uhm, no, I was merely asking your opinion on whether the 12db/octave
phenomenon applies to a non-EE based intelligence system - whether it could
be lifted off of its EE nativeness and applied to ANY network, since there
are latencies in ALL networks.  BUT it sounds as if it is heavily analog
circuit based, though there may be some *analogue* in an informational
network, and this would be represented under a different technical name or
formula most likely.
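
For what it's worth, the latency half of this does lift off cleanly - a
standard control-theory fact, sketched here with made-up ping values: a pure
transport delay T adds a phase lag of 360*f*T degrees, so ANY feedback loop
run over a network will oscillate near the frequency where that lag hits 180
degrees if the loop still has unity gain there, which is one way to read the
gamer ping oscillation:

def oscillation_frequency(latency_s):
    # Frequency where a pure transport delay alone contributes 180 degrees
    # of phase lag: 360 * f * T = 180  =>  f = 0.5 / T.
    return 0.5 / latency_s

for ping_ms in (10, 50, 200):
    print(f"ping {ping_ms:3d} ms -> loop prone to oscillate near "
          f"{oscillation_frequency(ping_ms / 1000.0):5.1f} Hz")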

 
 since ALL things are basically network, especially
 intelligence.
 
 Most of the things we call networks really just pass information along
and
 do NOT have feedback mechanisms. Power control is an interesting
 exception, but most of those guys are unable to even carry on an
intelligent
 conversation about the subject. No wonder the power networks have
 problems.

Steve - I actually did work in nuclear power engineering many years ago and
remember the Neanderthals involved in that situation, believe it or not. But
I will say they strongly emphasized practicality and safety versus theory
and academics. And trial and error especially was something to be frowned
upon ... for obvious reasons. IOW, do not rock the boat, since there are
real reasons for them being that way!

 
 This might be an overly aggressive assumption but it seems from observance
 that intelligence/consciousness exhibits some sort of harmonic property,
or
 levels.
 
 You apparently grok something about harmonics that I don't (yet) grok.
 Please enlighten me.
 

I was wondering if YOU could envision a harmonic correlation between certain
electrical circuit phenomenon and intelligence. I've just suspected that
there are harmonic properties in intelligence/consciousness. IOW there