Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Abram Demski
Steve,

You didn't mention this, so I guess I will: larger animals do generally have
larger brains, coming close to a fixed brain/body ratio. Smarter animals
appear to be the ones with a higher brain/body ratio rather than simply a
larger brain. This to me suggests that the amount of sensory information and
muscle coordination necessary is the most important determiner of the amount
of processing power needed. There could be other interpretations, however.

It's also pretty important to say that brains are expensive to fuel. It's
probably the case that other animals didn't get as smart as us because the
additional food they could get per ounce brain was less than the additional
food needed to support an ounce of brain. Humans were in a situation in
which it was more. So, I don't think your argument from other animals
supports your hypothesis terribly well.

One way around your instability, if it exists, would be (similar to your
hemisphere suggestion) to split the network into a number of individuals which
cooperate through very low-bandwidth connections. This would be like an
organization of humans working together. Hence, multiagent systems would
have a higher stability limit. However, it is still the case that we hit a
serious diminishing-returns scenario once we needed to start doing this
(since the low-bandwidth connections convey so much less info, we need waaay
more processing power for every IQ point or whatever). And, once these
organizations got really big, it's quite plausible that they'd have their
own stability issues.

--Abram

On Mon, Jun 21, 2010 at 11:19 AM, Steve Richfield steve.richfi...@gmail.com
 wrote:

 There has been an ongoing presumption that more brain (or computer) means
 more intelligence. I would like to question that underlying presumption.

 That being the case, why don't elephants and other large creatures have
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

 There are all sorts of network-destroying phenomena that arise from complex
 networks, e.g. phase-shift oscillators where circular analysis paths reinforce
 themselves, computational noise that is endlessly analyzed, etc. We know that our
 own brains are just barely stable, as flashing lights throw some people into
 epileptic attacks, etc. Perhaps network stability is the intelligence
 limiter? If so, then we aren't going to get anywhere without first fully
 understanding it.

 Suppose for a moment that theoretically perfect neurons could work in a
 brain of limitless size, but their imperfections accumulate (or multiply) to
 destroy network operation when you get enough of them together. Brains have
 grown larger because neurons have evolved to become more nearly perfect,
 without having yet (or ever) reaching perfection. Hence, evolution may have
 struck a balance, where less intelligence directly impairs survivability,
 and greater intelligence impairs network stability, and hence indirectly
 impairs survivability.
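
One way to make this concrete is a toy model (purely illustrative assumptions,
nothing specified in the thread): a linear random recurrent network stands in
for a brain of imperfect neurons, and the per-connection error sigma and the
network sizes are invented parameters. The point is only that a per-connection
imperfection that is harmless in a small network pushes a large one past the
stability boundary, since the spectral radius grows roughly as sigma * sqrt(N).

# Toy model: a linear recurrent network x(t+1) = W x(t), where W is built from
# "imperfect" couplings with per-connection error sigma. The network is stable
# only if the spectral radius of W stays below 1. For random couplings the
# spectral radius grows roughly like sigma * sqrt(N), so a fixed per-neuron
# imperfection that is harmless at small N destroys stability at large N.
# (Illustrative assumptions only -- not a claim about real neurons.)
import numpy as np

def spectral_radius(n_neurons, sigma, seed=0):
    """Largest |eigenvalue| of an n x n random coupling matrix with std sigma."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, sigma, size=(n_neurons, n_neurons))
    return float(np.max(np.abs(np.linalg.eigvals(w))))

if __name__ == "__main__":
    sigma = 0.05  # per-connection "imperfection" (assumed, arbitrary)
    for n in (10, 100, 400, 1600):
        r = spectral_radius(n, sigma)
        print(f"N={n:5d}  spectral radius ~ {r:.2f}  "
              f"({'stable' if r < 1 else 'UNSTABLE'})")

Under these assumptions, keeping a larger network stable requires the
per-connection imperfection to shrink roughly as 1/sqrt(N), which is one
reading of "neurons have evolved to become more nearly perfect."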

 If the above is indeed the case, then AGI and related efforts don't stand a
 snowball's chance in hell of ever outperforming humans, UNTIL the underlying
 network stability theory is well enough understood to perform perfectly to
 digital precision. This wouldn't necessarily have to address all aspects of
 intelligence, but would at minimum have to address large-scale network
 stability.

 One possibility is chopping large networks into pieces, e.g. the
 hemispheres of our own brains. However, like multi-core CPUs, there is work
 for only so many CPUs/hemispheres.

 There are some medium-scale network analogues in the world, e.g. the power
 grid. However, these have high-level central control and lots of
 crashes, so there may not be much to learn from them.

 Note in passing that I am working with some non-AGIers on power grid
 stability issues. While not fully understood, the primary challenge appears
 (to me) to be that the various control mechanisms (that includes humans in
 the loop) violate a basic requirement for feedback stability, namely, that
 the frequency response not drop off faster than 12db/octave at any
 frequency. Present control systems make binary all-or-nothing decisions that
 produce astronomical high-frequency components (edges and glitches) related
 to much lower-frequency phenomena (like overall demand). Other systems then
 attempt to deal with these edges and glitches, with predictably poor
 results. As with the stock market crash of May 6, there is a list of dates of
 major outages and near-outages, where the failures are poorly understood. In
 some cases, the lights stayed on, but for a few seconds came ever SO close
 to a widespread outage that dozens of articles were written about them, with
 apparently no one understanding things even to the basic level that I am
 explaining here.
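
The "binary all-or-nothing decisions produce astronomical high-frequency
components" point can be illustrated with a small, purely hypothetical
simulation: a bang-bang controller tracking a slow demand curve puts far more
of its output energy above 1 Hz than a proportional controller does. All
signals and numbers below are invented; this is not a model of any real grid.

# Compare a proportional response and a bang-bang (on/off) response to a
# slowly varying demand, then measure how much of each control signal's
# spectral energy lies above a cutoff frequency. Values are illustrative only.
import numpy as np

fs = 1000.0                      # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)     # 10 seconds of simulated time
demand = 0.5 + 0.4 * np.sin(2 * np.pi * 0.2 * t)   # slow, smooth demand

proportional = demand                       # ideal proportional response
bang_bang = (demand > 0.5).astype(float)    # all-or-nothing switching

def high_freq_fraction(x, cutoff_hz=1.0):
    """Fraction of (non-DC) spectral energy above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return float(spectrum[freqs > cutoff_hz].sum() / spectrum.sum())

print("proportional controller:", high_freq_fraction(proportional))
print("bang-bang controller:   ", high_freq_fraction(bang_bang))

The bang-bang signal's square-wave edges spread energy across many harmonics,
which is the "edges and glitches" that downstream systems then have to absorb.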

 Hence, a single theoretical insight might guide both power grid development
 and AGI development. For example, perhaps there is a necessary capability of
 components in large networks, to be able to custom-tailor their frequency
 response curves so as not to participate in unstable 

RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
 My underlying thought here is that we may all be working on the wrong
 problems. Instead of working on the particular analysis methods (AGI) or
 self-organization theory (NN), perhaps if someone found a solution to
large-
 network stability, then THAT would show everyone the ways to their
 respective goals.
 

For a distributed AGI this is a fundamental problem. The difference is that a
power grid is such a fixed network. A distributed AGI need not be that
fixed; it could lose chunks of itself but grow them out somewhere else.
Though a distributed AGI could be required to run as a fixed network.

Some traditional telecommunications networks are power-grid-like. They have
a drastic number of stability and healing functions built in, added over time.

Solutions for large-scale network stability would vary per network
topology, function, etc. Virtual networks play a large part; this would be
related to the network's ability to reconstruct itself, meaning knowing how
to heal, reroute, optimize and grow.
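
A minimal sketch of that heal-and-reroute idea, under the assumption that the
network keeps a redundant topology and simply recomputes routes around failed
nodes; the mesh and node names are made up for illustration.

# A network that "loses chunks of itself": keep a redundant topology and
# recompute routes when a node fails. Plain breadth-first search, no libraries.
from collections import deque

def shortest_path(edges, src, dst, dead=frozenset()):
    """Breadth-first search that ignores failed ('dead') nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in edges.get(node, ()):
            if nxt not in seen and nxt not in dead:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # dst unreachable

# A small redundant mesh of hypothetical agents A..E.
mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
        "D": ["B", "C", "E"], "E": ["D"]}

print(shortest_path(mesh, "A", "E"))              # normal route
print(shortest_path(mesh, "A", "E", dead={"B"}))  # healed route around B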

John






Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
Abram,

On Mon, Jun 21, 2010 at 8:38 AM, Abram Demski abramdem...@gmail.com wrote:

 Steve,

 You didn't mention this, so I guess I will: larger animals do generally
 have larger brains, coming close to a fixed brain/body ratio. Smarter
 animals appear to be the ones with a higher brain/body ratio rather than
 simply a larger brain. This to me suggests that the amount of sensory
 information and muscle coordination necessary is the most important
 determiner of the amount of processing power needed. There could be other
 interpretations, however.


It is REALLY hard to compare the intelligence of various animals, because
their innate behavior is overlaid on it. For example, based on ability to
follow instructions, cats must be REALLY stupid.


 It's also pretty important to say that brains are expensive to fuel. It's
 probably the case that other animals didn't get as smart as us because the
 additional food they could get per ounce brain was less than the additional
 food needed to support an ounce of brain. Humans were in a situation in
 which it was more. So, I don't think your argument from other animals
 supports your hypothesis terribly well.


Presume for a moment that you are right; then there will be no
singularity! No, this is NOT a reductio ad absurdum proof either way. Why
no singularity?

If there really is a limit to the value of intelligence, then why should we
think that there will be anything special about super-intelligence? Perhaps
we have been deluding ourselves because we want to think that the reason we
aren't all rich is that we just aren't smart enough, when in reality some
entirely different phenomenon may be key. Have YOU observed that success in
life is highly correlated with intelligence?


 One way around your instability, if it exists, would be (similar to your
 hemisphere suggestion) to split the network into a number of individuals which
 cooperate through very low-bandwidth connections.


While helping breadth of analysis, this would seem to absolutely limit
analysis depth to that of one individual.

This would be like an organization of humans working together. Hence,
 multiagent systems would have a higher stability limit.


Providing they don't get into a war of some sort.


 However, it is still the case that we hit a serious diminishing-returns
 scenario once we needed to start doing this (since the low-bandwidth
 connections convey so much less info, we need waaay more processing power
 for every IQ point or whatever).


I see more problems with analysis depth than with bandwidth limitations.


 And, once these organizations got really big, it's quite plausible that
 they'd have their own stability issues.


Yes.

Steve



Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Jim Bromer
I think a real-world solution to grid stability would require greater use of
sensory devices (and some sensory-feedback devices). I really don't know
for sure, but my assumption is that electrical grid management has relied
mostly on the electrical reactions of the grid itself, and here you are
saying that is just not good enough for critical fluctuations in 2010.  So
while software is also necessary of course, the first change in how grid
management should be done is through greater reliance on off-the-grid (or, at
minimum, backup on-grid) sensory devices.  I am quite confident, without
knowing anything about the subject, that that is what needs to be done,
because I understand a little about how different groups of people work and
I have seen how sensory devices like GPS and lidar have fundamentally
changed AI projects, because they allowed time-sensitive critical analysis
that was otherwise too slow for contemporary AI to do.  100 years from now,
electrical grid management won't require another layer of sensors, because
the software analysis of grid fluctuations will be sufficient. On the other
hand, grid managers will not remove these additional layers of sensors from
the grid a hundred years from now, any more than telephone engineers would
suggest that maybe they should stop using fiber optics because they could
get back to 1990 fiber-optic capacity and reliability using copper wire with
today's switching and software devices.
Jim Bromer

Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
John,

Your comments appear to be addressing reliability, rather than stability...

On Mon, Jun 21, 2010 at 9:12 AM, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
  My underlying thought here is that we may all be working on the wrong
  problems. Instead of working on the particular analysis methods (AGI) or
  self-organization theory (NN), perhaps if someone found a solution to
 large-
  network stability, then THAT would show everyone the ways to their
  respective goals.
 

 For a distributed AGI this is a fundamental problem. Difference is that a
 power grid is such a fixed network.


Not really. Switches may connect or disconnect Canada, equipment is
constantly failing and being repaired, etc. In any case, this doesn't seem
to be related to stability, other than it being a lot easier to analyze a
fixed network rather than a variable network.


 A distributed AGI need not be that
 fixed, it could lose chunks of itself but grow them out somewhere else.
 Though a distributed AGI could be required to run as a fixed network.

 Some traditional telecommunications networks are power grid like. They have
 a drastic amount of stability and healing functions built-in as have been
 added over time.


However, there is no feedback, so stability isn't even a potential issue.


 Solutions for large-scale network stabilities would vary per network
 topology, function, etc..


However, there ARE some universal rules, like the 12db/octave requirement.


 Virtual networks play a large part, this would be
 related to the network's ability to reconstruct itself meaning knowing how
 to heal, reroute, optimize and grow..


Again, this doesn't seem to relate to millisecond-by-millisecond stability.

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
Jim,

Yours is the prevailing view in the industry. However, it doesn't seem to
work. Even given months of time to analyze past failures, they are often
unable to divine rules that would have reliably avoided the problems. In
short, until you adequately understand the system that your sensors are
sensing, all the readings in the world won't help. Further, when a system is
fundamentally unstable, you must have a control system that completely deals
with the instability, or it absolutely will fail. The present system meets
neither of these criteria.

There is another MAJOR issue. Presuming a power control center in the middle
of the U.S., the round-trip time at the speed of light to each coast is
~16ms, or two half-cycles at 60Hz. In control terms, that is an eternity.
Distributed control requires fundamental stability to function reliably.
Times can be improved by having separate control systems for each coast, but
the interface would still have to meet fundamental stability criteria (like
limiting the rates of change), and our long coasts would still require a
full half-cycle of time to respond.
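
The arithmetic behind the ~16 ms figure, taking ~2,400 km as an assumed round
number for the center-to-coast distance (real signals travel slower than c, so
this is optimistic):

# Back-of-the-envelope check of the "~16 ms round trip" figure. The 2,400 km
# center-to-coast distance is an assumed round number for illustration.
SPEED_OF_LIGHT_KM_S = 300_000.0
CENTER_TO_COAST_KM = 2_400.0          # assumed
GRID_HZ = 60.0

round_trip_s = 2 * CENTER_TO_COAST_KM / SPEED_OF_LIGHT_KM_S
half_cycle_s = 1 / (2 * GRID_HZ)

print(f"round trip : {round_trip_s * 1000:.1f} ms")            # ~16.0 ms
print(f"half cycle : {half_cycle_s * 1000:.1f} ms at 60 Hz")   # ~8.3 ms
print(f"round trip ~ {round_trip_s / half_cycle_s:.1f} half-cycles")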

Note that faults must be responded to QUICKLY to save the equipment, and so
cannot be left to central control systems to operate.

So, we end up with the system we now have, that does NOT meet reasonable
stability criteria. Hence, we may forever have occasional outages until the
system is radically re-conceived.

Steve

RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
 John,
 
 Your comments appear to be addressing reliability, rather than
stability...

Both can be very interrelated. It can be an oversimplification to separate
them, or too impractical/theoretical. 

 On Mon, Jun 21, 2010 at 9:12 AM, John G. Rose johnr...@polyplexic.com
 wrote:
  -Original Message-
  From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
  My underlying thought here is that we may all be working on the wrong
  problems. Instead of working on the particular analysis methods (AGI) or
  self-organization theory (NN), perhaps if someone found a solution to
 large-
  network stability, then THAT would show everyone the ways to their
  respective goals.
 
 For a distributed AGI this is a fundamental problem. Difference is that a
 power grid is such a fixed network.
 
 Not really. Switches may connect or disconnect Canada, equipment is
 constantly failing and being repaired, etc. In any case, this doesn't seem
to be
 related to stability, other than it being a lot easier to analyze a fixed
network
 rather than a variable network.
 

There are a fixed number of copper wires going into a node.

The network is usually a hierarchy of networks. Fixed may be more
limiting, sophisticated, and kludged, rendering it more difficult to deal with,
so don't assume.

 A distributed AGI need not be that
 fixed, it could lose chunks of itself but grow them out somewhere else.
 Though a distributed AGI could be required to run as a fixed network.
 
 Some traditional telecommunications networks are power grid like. They
 have
 a drastic amount of stability and healing functions built-in as have been
 added over time.
 
 However, there is no feedback, so stability isn't even a potential issue.

No feedback? Remember, some traditional telecommunications networks run over
copper with power, and are analog; there are huge feedback issues, many of
which are taken care of at a lower signaling level or with external equipment
such as echo cancellers. Again though, there is a hierarchy and mesh of
various networks here. I've suggested traditional telecommunications since
they are vastly more complex and real-time, and many other networks have
learned from them.

 
 Solutions for large-scale network stabilities would vary per network
 topology, function, etc..
 
 However, there ARE some universal rules, like the 12db/octave requirement.
 

Really? Do networks such as botnets really care about this? Or does it
apply?

 Virtual networks play a large part, this would be
 related to the network's ability to reconstruct itself meaning knowing how
 to heal, reroute, optimize and grow..
 
 Again, this doesn't seem to relate to millisecond-by-millisecond
stability.
 

It could be, as the virtual network might contain images of the actual
network as an internal model, and use this for changing the network
structure to a more stable one if there were timing issues...

Just some thoughts...

John







Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
John,

On Mon, Jun 21, 2010 at 10:06 AM, John G. Rose johnr...@polyplexic.com wrote:


  Solutions for large-scale network stabilities would vary per network
  topology, function, etc..
 
  However, there ARE some universal rules, like the 12db/octave
 requirement.
 

 Really? Do networks such as botnets really care about this? Or does it
 apply?


Anytime negative feedback can become positive feedback because of delays or
phase shifts, this becomes an issue. Many competent EE people fail to see
the phase shifting that many decision processes can introduce: even when
responding as quickly as possible, finite speed means finite delays and
sharp frequency cutoffs, resulting in instabilities at those cutoff
frequencies because of violation of the 12db/octave rule. Of course, this
ONLY applies in feedback systems and NOT in forward-only systems, except at
the real-world point of feedback, e.g. the bots themselves.

Of course, there is the big question of just what it is that is being
attenuated in the bowels of an intelligent system. Usually, it is
computational delays making sharp frequency-limited attenuation at their
response speeds.

Every gamer is well aware of the oscillations that long ping times can
introduce in people's (and intelligent bots') behavior. Again, this is
basically the same 12db/octave phenomenon.
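
A toy version of that ping-time effect: an agent corrects an error using an
observation that is several steps stale. With the arbitrary gain below, zero
or one step of delay settles, while longer delays over-correct and oscillate
or diverge. Gains, delays, and step counts are all invented for illustration.

# Delayed negative feedback: e[t+1] = e[t] - gain * e[t - delay].
# Small delays settle; larger delays turn the correction into oscillation.
def simulate(delay_steps, gain=0.7, steps=60):
    error = [1.0] * (delay_steps + 1)           # history; start with unit error
    for _ in range(steps):
        observed = error[-1 - delay_steps]      # stale observation (the "ping")
        error.append(error[-1] - gain * observed)
    return max(abs(e) for e in error[-10:])     # residual swing at the end

for delay in (0, 1, 2, 4):
    swing = simulate(delay)
    print(f"delay={delay}: final swing ~ {swing:.3f} "
          f"({'settles' if swing < 0.1 else 'oscillates/diverges'})")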

Steve





Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Mike Tintner
Steve: For example, based on ability to follow instruction, cats must be REALLY 
stupid. 

Either that or really smart. Who wants to obey some dumb human's instructions?





Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Ian Parker
Isn't this the argument for GAs running on multicore processors? Now each
organism has one core/fraction of a core. The brain will then evaluate
*fitness*, given a fitness criterion.

The fact that they can be run efficiently in parallel is one of the advantages
of GAs.

Let us look at this another way: when an intelligent person thinks about a
problem, they will think about it in terms of a set of alternatives. This
could be said to be the start of genetic reasoning. So it does in fact take
place now.

A GA is the simplest parallel system you can think of for purposes of
illustration. However, when we answer *Jeopardy*-type questions, parallelism
is involved. This becomes clear when we look at how Watson actually
works: http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html
It works in parallel and then finds the most probable answer.
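
The common skeleton, sketched in Python: evaluate the alternatives in
parallel, then take the fittest (or most probable) one. The fitness function
and candidate values are placeholders invented for illustration, not anything
from Watson or any particular GA.

# Minimal "evaluate alternatives in parallel, then take the best" skeleton.
# The same shape covers one GA generation (one candidate per core) and a
# Watson-style ranking of candidate answers by score.
from multiprocessing import Pool

def fitness(candidate):
    """Toy fitness: how close the candidate is to an (assumed) target of 3.7."""
    return -abs(candidate - 3.7)

if __name__ == "__main__":
    candidates = [0.5, 1.2, 2.9, 3.5, 4.1, 5.8]     # one "organism" per worker
    with Pool() as pool:
        scores = pool.map(fitness, candidates)       # evaluated in parallel
    best = max(zip(scores, candidates))               # fittest / most probable
    print(f"best candidate: {best[1]} (score {best[0]:.2f})")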


  - Ian Parker


RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
 Really? Do networks such as botnets really care about this? Or does it
 apply?
 
 Anytime negative feedback can become positive feedback because of delays
 or phase shifts, this becomes an issue. Many competent EE people fail to
see
 the phase shifting that many decision processes can introduce, e.g. by
 responding as quickly as possible, finite speed makes finite delays and
sharp
 frequency cutoffs, resulting in instabilities at those frequency cutoff
points
 because of violation of the 12db/octave rule. Of course, this ONLY applies
in
 feedback systems and NOT in forward-only systems, except at the real-world
 point of feedback, e.g. the bots themselves.
 
 Of course, there is the big question of just what it is that is being
 attenuated in the bowels of an intelligent system. Usually, it is
 computational delays making sharp frequency-limited attenuation at their
 response speeds.
 
 Every gamer is well aware of the oscillations that long ping times can
 introduce in people's (and intelligent bot's) behavior. Again, this is
basically
 the same 12db/octave phenomenon.
 

OK, excuse my ignorance on this - a design issue in distributed intelligence
is how to split up things amongst the agents. I see it as a hierarchy of
virtual networks, with the lowest level being the substrate like IP sockets
or something else but most commonly TCP/UDP.

The protocols above that need to break up the work, and the knowledge
distribution, so the 12db/octave phenomenon must apply there too. 

I assume any intelligence processing engine must include a harmonic
mathematical component, since ALL things are basically networks, especially
intelligence.

This might be an overly aggressive assumption, but it seems from observation
that intelligence/consciousness exhibits some sort of harmonic property, or
levels.

John









Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Russell Wallace
On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
 That being the case, why don't elephants and other large creatures have 
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

Personally I've always wondered how elephants managed to evolve brains
as large as they currently have. How much intelligence does it take to
sneak up on a leaf? (Granted, intraspecies social interactions seem to
provide at least part of the answer.)

 There are all sorts of network-destroying phenomena that arise from complex
 networks, e.g. phase-shift oscillators where circular analysis paths reinforce
 themselves, computational noise that is endlessly analyzed, etc. We know that our
 own brains are just barely stable, as flashing lights throw some people into 
 epileptic attacks, etc. Perhaps network stability is the intelligence limiter?

Empirically, it isn't.

 Suppose for a moment that theoretically perfect neurons could work in a brain 
 of limitless size, but their imperfections accumulate (or multiply) to 
 destroy network operation when you get enough of them together. Brains have 
 grown larger because neurons have evolved to become more nearly perfect

Actually it's the other way around. Brains compensate for
imperfections (both transient error and permanent failure) in neurons
by using more of them.  Note that, as the number of transistors on a
silicon chip increases, the extent to which our chip designs do the
same thing also increases.

 There are some medium-scale network similes in the world, e.g. the power 
 grid. However, there they have high-level central control and lots of crashes

The power in my neighborhood fails once every few years (and that's
from all causes, including 'the cable guys working up the street put a
JCB through the line', not just network crashes). If you're getting
lots of power failures in your neighborhood, your electricity supply
company is doing something wrong.

 I wonder, does the very-large-scale network problem even have a prospective 
 solution? Is there any sort of existence proof of this?

Yes, our repeated successes in simultaneously improving both the size
and stability of very large scale networks (trade, postage, telegraph,
electricity, road, telephone, Internet) serve as very nice existence
proofs.




Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
Russell,

On Mon, Jun 21, 2010 at 1:29 PM, Russell Wallace
russell.wall...@gmail.comwrote:

 On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
 steve.richfi...@gmail.com wrote:
  That being the case, why don't elephants and other large creatures have
 really gigantic brains? This seems to be SUCH an obvious evolutionary step.

 Personally I've always wondered how elephants managed to evolve brains
 as large as they currently have. How much intelligence does it take to
 sneak up on a leaf? (Granted, intraspecies social interactions seem to
 provide at least part of the answer.)


I suspect that intra-species social behavior will expand to utilize all
available intelligence.


  There are all sorts of network-destroying phenomena that arise from
 complex networks, e.g. phase-shift oscillators where circular analysis paths
 reinforce themselves, computational noise that is endlessly analyzed, etc. We know
 that our own brains are just barely stable, as flashing lights throw some
 people into epileptic attacks, etc. Perhaps network stability is the
 intelligence limiter?

 Empirically, it isn't.


I see what you are saying, but I don't think you have made your case...


  Suppose for a moment that theoretically perfect neurons could work in a
 brain of limitless size, but their imperfections accumulate (or multiply) to
 destroy network operation when you get enough of them together. Brains have
 grown larger because neurons have evolved to become more nearly perfect

 Actually it's the other way around. Brains compensate for
 imperfections (both transient error and permanent failure) in neurons
 by using more of them.


William Calvin, the author who is most credited with making and spreading
this view, and I had a discussion on his Seattle rooftop, while throwing pea
gravel at a target planter. His assertion was that we utilize many parallel
circuits to achieve accuracy, and mine was that it was something else, e.g.
successive approximation. I pointed out that if one person tossed the pea
gravel by putting it on their open hand and pushing it at a target, and the
other person blocked their arm, that the relationship between how much of
the stroke was truncated and how great the error was would disclose the
method of calculation. The question boils down to whether
the error grows drastically even with small truncation of movement (because
a prototypical throw is used, as might be expected from a parallel
approach), or grows exponentially with truncation because error-correcting
steps have been lost. We observed apparent exponential growth, much smaller
than would be expected from parallel computation, though no one was keeping score.

In summary, having performed the above experiment, I reject this common
view.
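
For what it is worth, the logic of that experiment can be caricatured
numerically. In the toy model below, an open-loop "prototypical" throw is
assumed to lose accuracy quickly even for small truncations, while successive
approximation is assumed to lose one error-correcting step per unit of
truncation, so its error grows geometrically. The curve shapes, not the
numbers, are the point; every parameter is invented.

# Error vs. fraction of the stroke lost, under two assumed strategies.
def prototypical_error(fraction_lost, base_error=0.05):
    # Open-loop plan: truncation costs accuracy quickly even when small.
    return base_error + 1.0 * fraction_lost

def successive_error(fraction_lost, steps=10, correction=0.5, initial_error=1.0):
    # Each surviving step removes a fixed fraction of the remaining error,
    # so error grows geometrically with the number of correction steps lost.
    surviving = int(steps * (1.0 - fraction_lost))
    return initial_error * (1.0 - correction) ** surviving

for lost in (0.0, 0.1, 0.2, 0.4, 0.8):
    print(f"stroke lost {lost:3.0%}: prototypical ~ {prototypical_error(lost):.3f}"
          f"   successive approx ~ {successive_error(lost):.3f}")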

Note that, as the number of transistors on a
 silicon chip increases, the extent to which our chip designs do the
 same thing also increases.


Another pet peeve of mine. They could/should do MUCH more fault tolerance
than they now do. Present puny efforts are completely ignorant of past
developments, e.g. Tandem NonStop computers.


  There are some medium-scale network similes in the world, e.g. the power
 grid. However, there they have high-level central control and lots of
 crashes

 The power in my neighborhood fails once every few years (and that's
 from all causes, including 'the cable guys working up the street put a
 JCB through the line', not just network crashes). If you're getting
 lots of power failures in your neighborhood, your electricity supply
 company is doing something wrong.


If you look at the failures/bandwidth, it is pretty high. The point is that
the information bandwidth of the power grid is EXTREMELY low, so it
shouldn't fail at all, at least not more than maybe once per century.
However, just like the May 6 problem, it sometimes gets itself into trouble
of its own making. Any overload SHOULD simply result in shutting down some
low-priority load, like the heaters in steel plants, and this usually works
as planned. However, it sometimes fails for VERY complex reasons - so
complex that PhD engineers are unable to put it into words, despite having
millisecond-by-millisecond histories to work from.
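
The "shut down some low-priority load" rule is simple to state in code; the
sketch below sheds loads in priority order until demand fits capacity. The
load names, priorities, and megawatt figures are all invented for illustration.

# Greedy priority-based load shedding: serve the most critical loads that fit
# under the available capacity, shed the rest (lowest priority goes first).
def shed_loads(loads, capacity_mw):
    """loads: list of (name, priority, mw); higher priority = keep first."""
    kept = sorted(loads, key=lambda l: l[1], reverse=True)  # most critical first
    total, served = 0.0, []
    for name, priority, mw in kept:
        if total + mw <= capacity_mw:
            served.append(name)
            total += mw
    shed = [name for name, _, _ in loads if name not in served]
    return served, shed

loads = [("hospital", 9, 40.0), ("homes", 7, 300.0),
         ("steel plant heaters", 1, 150.0), ("data center", 5, 80.0)]
served, shed = shed_loads(loads, capacity_mw=430.0)
print("served:", served)   # highest priorities that fit under capacity
print("shed:  ", shed)     # the steel plant heaters go first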


  I wonder, does the very-large-scale network problem even have a
 prospective solution? Is there any sort of existence proof of this?

 Yes, our repeated successes in simultaneously improving both the size
 and stability of very large scale networks (trade,


NOT stable at all. Just look at the condition of the world's economy.


 postage, telegraph,
 electricity, road, telephone, Internet)


None of these involve feedback, the fundamental requirement to be a
network rather than a simple tree structure. This despite common misuse of
the term network to cover everything with lots of interconnections.


 serve as very nice existence
 proofs.


I'm still looking.

Steve




Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Russell Wallace
On Mon, Jun 21, 2010 at 11:05 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
 Another pet peeve of mine. They could/should do MUCH more fault tolerance 
 than they now are. Present puny efforts are completely ignorant of past 
 developments, e.g. Tandem Nonstop computers.

Or perhaps they just figure once the mean time between failure is on
the order of, say, a year, customers aren't willing to pay much for
further improvement. (Note that things like financial databases which
still have difficulty scaling horizontally, do get more fault
tolerance than an ordinary PC. Note also that they pay a hefty premium
for this, more than you or I would be willing or able to pay.)

 The power in my neighborhood fails once every few years (and that's
 from all causes, including 'the cable guys working up the street put a
 JCB through the line', not just network crashes). If you're getting
 lots of power failures in your neighborhood, your electricity supply
 company is doing something wrong.

 If you look at the failures/bandwidth, it is pretty high.

 So what? Nobody except you cares about that metric.  Anyway, the
phone system is in the same league, and the Internet is a lot closer
to it than it was in the past, and those have vastly higher bandwidth.

 Yes, our repeated successes in simultaneously improving both the size
 and stability of very large scale networks (trade,

 NOT stable at all. Just look at the condition of the world's economy.

Better than it was in the 1930s, despite a lot greater complexity.

 postage, telegraph,
 electricity, road, telephone, Internet)

 None of these involve feedback, the fundamental requirement to be a network 
 rather than a simple tree structure. This despite common misuse of the term 
 network to cover everything with lots of interconnections.

 All of them involve massive amounts of feedback. Unless you're
adopting a private definition of the word feedback, in which case by
your private definition, if it is to be at all consistent, neither
brains nor computers running AI programs will involve feedback either,
so it's immaterial.




Re: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread Steve Richfield
John,

Hmmm, I thought that with your EE background, the 12db/octave would
bring back old sophomore-level course work. OK, so you were sick that day.
I'll try to fill in the blanks here...

On Mon, Jun 21, 2010 at 11:16 AM, John G. Rose johnr...@polyplexic.com wrote:


  Of course, there is the big question of just what it is that is being
  attenuated in the bowels of an intelligent system. Usually, it is
  computational delays making sharp frequency-limited attenuation at their
  response speeds.
 
  Every gamer is well aware of the oscillations that long ping times can
  introduce in people's (and intelligent bot's) behavior. Again, this is
 basically
  the same 12db/octave phenomenon.
 

 OK, excuse my ignorance on this - a design issue in distributed
 intelligence
 is how to split up things amongst the agents. I see it as a hierarchy of
 virtual networks, with the lowest level being the substrate like IP sockets
 or something else but most commonly TCP/UDP.

 The protocols above that need to break up the work, and the knowledge
 distribution, so the 12db/octave phenomenon must apply there too.


RC low-pass circuits exhibit 6db/octave rolloff and 90 degree phase shifts.
12db/octave corresponds to a 180 degree phase shift. More than 180 degrees
and you are into positive feedback. At 24db/octave, you are at maximum *
positive* feedback, which makes great oscillators.

The 12 db/octave limit applies to entire loops of components, and not to the
individual components. This means that you can put a lot of 1db/octave
components together in a big loop and get into trouble. This is commonly
encountered in complex analog filter circuits that incorporate 2 or more
op-amps in a single feedback loop. Op amps are commonly compensated to
have 6db/octave rolloff. Put 2 of them together and you right at the
precipice of 12db/octave. Add some passive components that have their own
rolloffs, and you are over the edge of stability, and the circuit sits there
and oscillates on its own. The usual cure is to replace one of the op-amps
with an *un*compensated op-amp with ~0db/octave rolloff, until it gets to
its maximum frequency, whereupon it has an astronomical rolloff. However,
that astronomical rolloff works BECAUSE the loop gain at that frequency is
less than 1, so the circuit cannot self-regenerate and oscillate at that
frequency.
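
The same argument in numbers (illustrative pole frequencies and gain, chosen
only to show the effect): two identical single-pole stages in one loop leave
essentially no phase margin at the unity-gain crossover, while pushing one
pole far out restores it.

# Each single-pole stage rolls off at 6db/octave and contributes up to 90
# degrees of phase lag, so a loop of two identical compensated stages
# approaches 180 degrees of lag (essentially no margin) at crossover.
import numpy as np

def phase_margin_deg(dc_gain, pole_hz_list):
    """Phase margin of a loop gain = dc_gain * product of single-pole stages."""
    f = np.logspace(0, 8, 200_000)                      # 1 Hz .. 100 MHz
    loop = dc_gain * np.ones_like(f, dtype=complex)
    for p in pole_hz_list:
        loop *= 1.0 / (1.0 + 1j * f / p)
    mag = np.abs(loop)
    idx = np.argmin(np.abs(mag - 1.0))                  # unity-gain crossover
    return 180.0 + np.degrees(np.angle(loop[idx]))

print("two identical poles :", round(phase_margin_deg(1e5, [10.0, 10.0]), 1), "deg")
print("one pole much faster:", round(phase_margin_deg(1e5, [10.0, 1e6]), 1), "deg")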

Considering the above and the complexity of neural circuits, it would seem
that neural circuits would have to have absolutely flat responses and some
central rolloff mechanism, maybe one of the ~200 different types of neurons,
or alternatively, would have to be able to custom-tailor their responses to
work in concert to roll off at a reasonable rate. A third alternative is
discussed below, where you let them go unstable, and actually utilize the
instability to achieve some incredible results.


 I assume any intelligence processing engine must include a harmonic
 mathematical component


I'm not sure I understand what you are saying here. Perhaps you have
discovered the recipe for the secret sauce?


 since ALL things are basically network, especially
 intelligence.


Most of the things we call networks really just pass information along and
do NOT have feedback mechanisms. Power control is an interesting exception,
but most of those guys are unable to even carry on an intelligent
conversation about the subject. No wonder the power networks have problems.


 This might be an overly aggressive assumption but it seems from observance
 that intelligence/consciousness exhibits some sort of harmonic property, or
 levels.


You apparently grok something about harmonics that I don't (yet) grok.
Please enlighten me.

Are you familiar with regenerative receiver operation where operation is on
the knife-edge of instability, or super-regenerative receiver operation,
wherein an intentionally UNstable circuit is operated to achieve phenomenal
gain and specifically narrow bandwidth? These were common designs back in
the early vacuum tube era, when active components cost a day's wages. Given
all of the observed frequency components coming from neural circuits,
perhaps neurons do something similar to actually USE instability to their
benefit?! Is this related to your harmonic thoughts?
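
For reference, the textbook idealization of regeneration is easy to sketch:
positive feedback with loop gain just below 1 multiplies the stage gain by
about 1/(1 - loop gain) and narrows the bandwidth by roughly the same factor,
until at loop gain = 1 the stage becomes an oscillator. The stage gain and
bandwidth values below are arbitrary.

# Idealized regenerative stage: gain is boosted and bandwidth narrowed by the
# same factor as the loop gain approaches the knife-edge at 1.0.
def regenerated(stage_gain, bandwidth_hz, loop_gain):
    """Closed-loop gain and bandwidth of an idealized regenerative stage."""
    if loop_gain >= 1.0:
        return float("inf"), 0.0          # past the knife-edge: an oscillator
    boost = 1.0 / (1.0 - loop_gain)
    return stage_gain * boost, bandwidth_hz / boost

for lg in (0.0, 0.9, 0.99, 0.999, 1.0):
    gain, bw = regenerated(stage_gain=10.0, bandwidth_hz=10_000.0, loop_gain=lg)
    print(f"loop gain {lg:5.3f}: gain ~ {gain:10.1f}, bandwidth ~ {bw:8.1f} Hz")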

Thanks.

Steve





RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 John,
 
 Hmmm, I though that with your EE background, that the 12db/octave would
 bring back old sophomore-level course work. OK, so you were sick that day.
 I'll try to fill in the blanks here...

Thanks man. Appreciate it.  What little EE training I did undergo was brief
and painful :)

 On Mon, Jun 21, 2010 at 11:16 AM, John G. Rose johnr...@polyplexic.com
 wrote:
 
  Of course, there is the big question of just what it is that is being
  attenuated in the bowels of an intelligent system. Usually, it is
  computational delays making sharp frequency-limited attenuation at their
  response speeds.
 
  Every gamer is well aware of the oscillations that long ping times can
  introduce in people's (and intelligent bot's) behavior. Again, this is
 basically
  the same 12db/octave phenomenon.
 
 OK, excuse my ignorance on this - a design issue in distributed
intelligence
 is how to split up things amongst the agents. I see it as a hierarchy of
 virtual networks, with the lowest level being the substrate like IP
sockets
 or something else but most commonly TCP/UDP.
 
 The protocols above that need to break up the work, and the knowledge
 distribution, so the 12db/octave phenomenon must apply there too.
 
 RC low-pass circuits exhibit 6db/octave rolloff and 90 degree phase
shifts.
 12db/octave corresponds to a 180 degree phase shift. More than 180
 degrees and you are into positive feedback. At 24db/octave, you are at
 maximum positive feedback, which makes great oscillators.
 
 The 12 db/octave limit applies to entire loops of components, and not to
the
 individual components. This means that you can put a lot of 1db/octave
 components together in a big loop and get into trouble. This is commonly
 encountered in complex analog filter circuits that incorporate 2 or more
op-
 amps in a single feedback loop. Op amps are commonly compensated to
 have 6db/octave rolloff. Put 2 of them together and you right at the
precipice
 of 12db/octave. Add some passive components that have their own rolloffs,
 and you are over the edge of stability, and the circuit sits there and
oscillates
 on its own. The usual cure is to replace one of the op-amps with an
 uncompensated op-amp with ~0db/octave rolloff, until it gets to its
 maximum frequency, whereupon it has an astronomical rolloff. However,
 that astronomical rolloff works BECAUSE the loop gain at that frequency is
 less than 1, so the circuit cannot self-regenerate and oscillate at that
 frequency.
 
 Considering the above and the complexity of neural circuits, it would seem
 that neural circuits would have to have absolutely flat responses and some
 central rolloff mechanism, maybe one of the ~200 different types of
 neurons, or alternatively, would have to be able to custom-tailor their
 responses to work in concert to roll off at a reasonable rate. A third
 alternative is discussed below, where you let them go unstable, and
actually
 utilize the instability to achieve some incredible results.
 
 I assume any intelligence processing engine must include a harmonic
 mathematical component
 
 I'm not sure I understand what you are saying here. Perhaps you have
 discovered the recipe for the secret sauce?

Uhm, no, I was merely asking your opinion on whether the 12db/octave phenomenon
applies to a non-EE-based intelligence system. If it could be lifted off of
its EE nativeness and applied to ANY network, since there are latencies in
ALL networks.  BUT it sounds as if it is heavily analog-circuit based,
though there may be some *analogue* in an informational network. And this
would be represented under a different technical name or formula, most
likely.

 
 since ALL things are basically network, especially
 intelligence.
 
 Most of the things we call networks really just pass information along
and
 do NOT have feedback mechanisms. Power control is an interesting
 exception, but most of those guys are unable to even carry on an
intelligent
 conversation about the subject. No wonder the power networks have
 problems.

Steve - I actually did work in nuclear power engineering many years ago and
remember the Neanderthals involved in that situation believe it or not. But
I will say they strongly emphasized practicality and safety versus
theory and academics. And especially, trial and error was something to be
frowned upon ... for obvious reasons. IOW, do not rock the boat since there
are real reasons for them being that way!

 
 This might be an overly aggressive assumption but it seems from observance
 that intelligence/consciousness exhibits some sort of harmonic property,
or
 levels.
 
 You apparently grok something about harmonics that I don't (yet) grok.
 Please enlighten me.
 

I was wondering if YOU could envision a harmonic correlation between certain
electrical circuit phenomena and intelligence. I've just suspected that
there are harmonic properties in intelligence/consciousness. IOW there