Russell,

On Mon, Jun 21, 2010 at 1:29 PM, Russell Wallace
<russell.wall...@gmail.com> wrote:

> On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
> <steve.richfi...@gmail.com> wrote:
> > That being the case, why don't elephants and other large creatures have
> really gigantic brains? This seems to be SUCH an obvious evolutionary step.
>
> Personally I've always wondered how elephants managed to evolve brains
> as large as they currently have. How much intelligence does it take to
> sneak up on a leaf? (Granted, intraspecies social interactions seem to
> provide at least part of the answer.)
>

I suspect that intraspecies social behavior will expand to utilize all
available intelligence.

>
> > There are all sorts of network-destroying phenomena that arise from
> complex networks, e.g. phase shift oscillators where circular analysis paths
> reinforce themselves, computational noise is endlessly analyzed, etc. We know
> that our own brains are just barely stable, as flashing lights throw some
> people into epileptic attacks, etc. Perhaps network stability is the
> intelligence limiter?
>
> Empirically, it isn't.
>

I see what you are saying, but I don't think you have "made your case"...

>
> > Suppose for a moment that theoretically perfect neurons could work in a
> brain of limitless size, but their imperfections accumulate (or multiply) to
> destroy network operation when you get enough of them together. Brains have
> grown larger because neurons have evolved to become more nearly perfect
>
> Actually it's the other way around. Brains compensate for
> imperfections (both transient error and permanent failure) in neurons
> by using more of them.


William Calvin, the author most credited with originating and spreading
this view, and I once discussed it on his Seattle rooftop while throwing pea
gravel at a target planter. His assertion was that we utilize many parallel
circuits to achieve accuracy; mine was that it is something else, e.g.
successive approximation. I pointed out that if one person tossed the pea
gravel by putting it on an open hand and pushing it at the target, and the
other person blocked the throwing arm partway through the stroke, then the
relationship between how much of the stroke was truncated and how large the
error was would disclose the method of computation. The question boils down
to whether the error grows drastically even with a small truncation of
movement (because a prototypical throw is used, as might be expected from a
parallel approach), or grows exponentially with truncation because
error-correcting steps have been lost. We observed apparently exponential
growth, with errors for small truncations much smaller than a parallel
approach would predict, though no one was keeping score.

In summary, having performed the above experiment, I reject this common
view.
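To make the distinction concrete, here is a hypothetical toy model of the two predictions (the step count, halving factor, and noise level are all made-up numbers for illustration, not measurements from the rooftop):

```python
STEPS = 10           # assumed number of correction steps in a full stroke
INITIAL_ERROR = 1.0  # normalized starting error

def successive_error(truncated_steps):
    # Successive approximation: each completed step halves the remaining
    # error, so losing the last k steps leaves the error 2**k times larger
    # than a full stroke's -- exponential growth in the truncation.
    completed = STEPS - truncated_steps
    return INITIAL_ERROR * 0.5 ** completed

def parallel_error(truncation_fraction, base_noise=0.05):
    # Parallel prototypical throw: cutting off a fraction of the stroke
    # removes that fraction of the ballistic impulse, so the miss distance
    # jumps roughly in proportion to the truncation, on top of fixed noise.
    return base_noise + truncation_fraction * INITIAL_ERROR

for k in range(STEPS + 1):
    f = k / STEPS
    print(f"truncate {f:4.0%}: successive={successive_error(k):8.5f}  "
          f"parallel={parallel_error(f):8.5f}")
```

Under these assumptions, a small truncation barely hurts the successive-approximation thrower but immediately hurts the parallel one, which is the signature we were looking for.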

Note that, as the number of transistors on a
> silicon chip increases, the extent to which our chip designs do the
> same thing also increases.
>

Another pet peeve of mine: chips could and should incorporate MUCH more
fault tolerance than they now do. Present puny efforts are completely
ignorant of past developments, e.g. the Tandem NonStop computers.
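The classic technique those older systems relied on is majority voting over redundant modules. A minimal sketch of triple modular redundancy (my own illustration, not a description of any particular Tandem design):

```python
def tmr_vote(a, b, c):
    """Triple modular redundancy: run a computation on three redundant
    modules and return the majority result, masking one faulty module."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    # Two simultaneous faults: no majority exists, so fail loudly
    # rather than return a wrong answer.
    raise RuntimeError("no majority: all three modules disagree")

# One faulty module (the 0) is outvoted and masked:
print(tmr_vote(7, 7, 0))
```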

>
> > There are some medium-scale network similes in the world, e.g. the power
> grid. However, there they have high-level central control and lots of
> crashes
>
> The power in my neighborhood fails once every few years (and that's
> from all causes, including 'the cable guys working up the street put a
> JCB through the line', not just network crashes). If you're getting
> lots of power failures in your neighborhood, your electricity supply
> company is doing something wrong.
>

If you look at failures per unit of bandwidth, the rate is pretty high. The
point is that the "information bandwidth" of the power grid is EXTREMELY
low, so it shouldn't fail at all, at least not more than maybe once per
century. However, just like the May 6 problem, it sometimes gets itself into
trouble of its own making. Any overload SHOULD simply result in shutting
down some low-priority load, like the heaters in steel plants, and this
usually works as planned. However, it sometimes fails for VERY complex
reasons - so complex that PhD engineers are unable to put them into words,
despite having millisecond-by-millisecond histories to work from.
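The intended behavior is trivial to state in code, which is exactly why the complex failures are so striking. A sketch of priority-ordered load shedding (the load names, priorities, and megawatt figures are invented for illustration):

```python
def shed_load(loads, capacity):
    """Shed lowest-priority loads until total demand fits within capacity.

    loads: list of (name, priority, megawatts); higher priority = shed last.
    Returns (kept, shed) lists of load tuples.
    """
    # Sort so the lowest-priority load sits at the end, ready to pop.
    kept = sorted(loads, key=lambda load: load[1], reverse=True)
    total = sum(mw for _, _, mw in kept)
    shed = []
    while total > capacity and kept:
        load = kept.pop()  # drop the lowest-priority load first
        shed.append(load)
        total -= load[2]
    return kept, shed

loads = [("hospital", 3, 50), ("homes", 2, 300), ("steel-plant heaters", 1, 400)]
kept, shed = shed_load(loads, capacity=500)
print("kept:", kept)   # hospital and homes survive
print("shed:", shed)   # the steel-plant heaters are dropped
```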

>
> > I wonder, does the very-large-scale network problem even have a
> prospective solution? Is there any sort of existence proof of this?
>
> Yes, our repeated successes in simultaneously improving both the size
> and stability of very large scale networks (trade,


NOT stable at all. Just look at the condition of the world's economy.


> postage, telegraph,
> electricity, road, telephone, Internet)


None of these involve feedback, the fundamental requirement for being a
"network" rather than a simple tree structure. This is despite the common
misuse of the term "network" to cover everything with lots of
interconnections.


> serve as very nice existence
> proofs.
>

I'm still looking.

Steve



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/