Isn't this the argument for GAs running on multicore processors? Each
organism then gets one core (or a fraction of one), and the "brain"
evaluates *fitness* against a fitness criterion.

The fact that they can be run efficiently in parallel is one of the
advantages of GAs.
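
To make that concrete, here is a minimal sketch in Python of fitness
evaluation parallelised across cores. The standard-library multiprocessing
module is real; the gene encoding and the gene-sum fitness criterion are
placeholders invented purely for illustration:

    # Evaluate a GA population in parallel, roughly one organism per core.
    import random
    from multiprocessing import Pool

    def fitness(organism):
        # Placeholder fitness criterion: higher gene sum = fitter.
        return sum(organism)

    if __name__ == "__main__":
        # 100 organisms, each a vector of 10 random genes.
        population = [[random.random() for _ in range(10)]
                      for _ in range(100)]
        with Pool() as pool:  # defaults to one worker process per core
            scores = pool.map(fitness, population)
        best_score, best_organism = max(zip(scores, population))
        print("best fitness:", best_score)

Selection, crossover, and mutation would then operate on the scored
population; the point is that the expensive step, fitness evaluation,
parallelises trivially.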

Let us look at this another way: when an intelligent person thinks about a
problem, they think about it in terms of a set of alternatives. That could
be said to be the start of genetic reasoning, so in a sense it already
takes place.

A GA is the simplest parallel system one can think of for purposes of
illustration. However, parallelism is also involved when we answer
*Jeopardy*-type questions. This becomes clear when we look at how Watson
actually works: <http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html>.
It works in parallel and then picks the most probable answer.
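
Stripped to its bones, that pattern is: generate candidate answers, score
them all in parallel, return the most probable. The sketch below is only an
illustration of the shape of the computation; the candidates and the scoring
function are invented stand-ins, and Watson's actual evidence-scoring
pipeline is far richer:

    # Score candidate answers in parallel; keep the most probable one.
    from concurrent.futures import ProcessPoolExecutor

    def evidence_score(candidate):
        # Invented stand-in for evidence scoring; returns a pseudo-probability.
        return (sum(ord(c) for c in candidate) % 100) / 100.0

    def best_answer(candidates):
        with ProcessPoolExecutor() as ex:  # one scorer per core
            probs = list(ex.map(evidence_score, candidates))
        return max(zip(probs, candidates))[1]

    if __name__ == "__main__":
        print(best_answer(["candidate A", "candidate B", "candidate C"]))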


  - Ian Parker

On 21 June 2010 16:38, Abram Demski <abramdem...@gmail.com> wrote:

> Steve,
>
> You didn't mention this, so I guess I will: larger animals do generally
> have larger brains, coming close to a fixed brain/body ratio. Smarter
> animals appear to be the ones with a higher brain/body ratio rather than
> simply a larger brain. This to me suggests that the amount of sensory
> information and muscle coordination necessary is the most important
> determinant of the amount of processing power needed. There could be other
> interpretations, however.
>
> It's also pretty important to say that brains are expensive to fuel. It's
> probably the case that other animals didn't get as smart as us because the
> additional food they could get per ounce of brain was less than the additional
> food needed to support an ounce of brain. Humans were in a situation in
> which it was more. So, I don't think your argument from other animals
> supports your hypothesis terribly well.
>
> One way around your instability, if it exists, would be (similar to your
> hemisphere suggestion) to split the network into a number of individuals which
> cooperate through very low-bandwidth connections. This would be like an
> organization of humans working together. Hence, multiagent systems would
> have a higher stability limit. However, it is still the case that we hit a
> serious diminishing-returns scenario once we need to start doing this
> (since the low-bandwidth connections convey so much less info, we need waaay
> more processing power for every IQ point or whatever). And, once these
> organizations got really big, it's quite plausible that they'd have their
> own stability issues.
>
> --Abram
>
> On Mon, Jun 21, 2010 at 11:19 AM, Steve Richfield <
> steve.richfi...@gmail.com> wrote:
>
>> There has been an ongoing presumption that more "brain" (or computer)
>> means more intelligence. I would like to question that underlying
>> presumption.
>>
>> If that presumption were true, why don't elephants and other large creatures have
>> really gigantic brains? This seems to be SUCH an obvious evolutionary step.
>>
>> There are all sorts of network-destroying phenomena that arise in complex
>> networks, e.g. phase-shift oscillators where circular analysis paths reinforce
>> themselves, computational noise is endlessly analyzed, etc. We know that our
>> own brains are just barely stable, as flashing lights throw some people into
>> epileptic attacks, etc. Perhaps network stability is the intelligence
>> limiter? If so, then we aren't going to get anywhere without first fully
>> understanding it.
>>
>> Suppose for a moment that theoretically perfect neurons could work in a
>> brain of limitless size, but their imperfections accumulate (or multiply) to
>> destroy network operation when you get enough of them together. Brains have
>> grown larger because neurons have evolved to become more nearly perfect,
>> without yet (or ever) having reached perfection. Hence, evolution may have
>> struck a "balance", where less intelligence directly impairs survivability,
>> and greater intelligence impairs network stability, and hence indirectly
>> impairs survivability.
>>
>> If the above is indeed the case, then AGI and related efforts don't stand
>> a snowball's chance in hell of ever outperforming humans, UNTIL the
>> underlying network-stability theory is understood well enough to be applied
>> perfectly, to digital precision. This wouldn't necessarily have to address
>> all aspects of intelligence, but would at minimum have to address
>> large-scale network stability.
>>
>> One possibility is chopping large networks into pieces, e.g. the
>> hemispheres of our own brains. However, as with multi-core CPUs, there is work
>> for only so many CPUs/hemispheres.
>>
>> There are some medium-scale network analogues in the world, e.g. the power
>> grid. However, there they have high-level central control and lots of
>> crashes, so there may not be much to learn from them.
>>
>> Note in passing that I am working with some non-AGIers on power grid
>> stability issues. While not fully understood, the primary challenge appears
>> (to me) to be that the various control mechanisms (that includes humans in
>> the loop) violate a basic requirement for feedback stability, namely, that
>> the frequency response not roll off faster than 12 dB/octave at any
>> frequency. Present control systems make binary all-or-nothing decisions that
>> produce astronomical high-frequency components (edges and glitches) related
>> to much lower-frequency phenomena (like overall demand). Other systems then
>> attempt to deal with these edges and glitches, with predictably poor
>> results. As with the stock market crash of May 6, there is a list of dates of
>> major outages and near-outages, where the failures are poorly understood. In
>> some cases, the lights stayed on, but for a few seconds came ever SO close
>> to a widespread outage that dozens of articles were written about them, with
>> apparently no one understanding things even to the basic level that I am
>> explaining here.
>>
>> Hence, a single theoretical insight might guide both power grid
>> development and AGI development. For example, perhaps there is a necessary
>> capability of components in large networks: the ability to custom-tailor
>> their frequency-response curves so as not to participate in unstable operation?
>>
>> I wonder: does the very-large-scale network problem even have a
>> prospective solution? Is there any sort of existence proof of this?
>>
>> My underlying thought here is that we may all be working on the wrong
>> problems. Instead of working on the particular analysis methods (AGI) or
>> self-organization theory (NN), perhaps if someone found a solution to
>> large-network stability, then THAT would show everyone the ways to their
>> respective goals.
>>
>> Does anyone here know of a good starting point to understanding
>> large-scale network stability?
>>
>> Any thoughts?
>>
>> Steve
>>
>>
>
>
>
> --
> Abram Demski
> http://lo-tho.blogspot.com/
> http://groups.google.com/group/one-logic
>


