Matt,

> > <1% of neurons spike - and they are BIG, making them easy enough to find
> > and monitor even with the primitive vacuum-tube equipment that was
> > initially used. Spiking is apparently the way that the neural equivalent
> > of "bus drivers" operate over long distances. Nearly all neurons operate
> > continuously. Apparently they are electrically fast, and chemically slow,
> > moving ions of various types around to keep statistics on which to adjust
> > their functionality.
>
> About 90% of brain cells are non-spiking astrocytes. It is debatable
> how much computation they do.
>

True, but that was NOT what I was referring to. The vast majority of
NEURONS don't spike. Apparently, the only ones that spike are those with
long axons that carry their signals far (e.g. some fraction of a
millimeter) away. Most neurons produce gradually changing analog outputs.
Of course, this is then somewhat "digitized" as they send individual
neurotransmitter molecules across synapses.

The "secret sauce" bidirectional computing part of this is that the
neurotransmitter molecules are then altered and sent back - or sometimes
not sent back, depending on what the receiving neuron is doing. This easily
supports back-propagation-like phenomena. No one now knows how many TYPES
of such neurotransmitter molecules there are, but there may be several
parallel bidirectional channels of communication. These would necessarily
be slower than computations done electrically.
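
As a concrete picture of what a back-propagation-like retrograde signal
can do, here is a minimal single-synapse sketch (the sigmoid, learning
rate, and 0.9 target are arbitrary illustrative choices, not claims about
actual neural chemistry):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One "synapse" with weight w: the forward direction carries activity;
# the backward direction carries an error signal across the same
# connection, which adjusts the synapse locally.
w, lr = 0.5, 1.0  # arbitrary initial weight and learning rate

for _ in range(2000):
    x = 1.0                 # presynaptic activity
    y = sigmoid(w * x)      # postsynaptic output (forward signal)
    err = 0.9 - y           # mismatch seen by the receiving side
    # Retrograde ("sent back") signal tunes the synapse:
    w += lr * err * y * (1.0 - y) * x

print(round(sigmoid(w), 2))  # output has converged near the 0.9 target
```

The point is only that a signal traveling backward across the SAME
connection is enough to tune that connection - no global controller needed.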

>
> >> > This would also support bidirectional computing, that I believe is
> >> > also a necessary requirement.
> >>
> >> Reversible computing is only required for a quantum computer, which
> >> the brain is not.
> >
> >
> > Do you have ANYTHING to support this statement? Most real world systems,
> > even crude mechanical systems, operate "reversibly" in that resistance
> > to movement modifies the movement, etc. Predicting the operation of such
> > systems is MUCH easier to do in a reversible context. Sure you can
> > simulate these digitally, but the computational requirements rise FASTER
> > than linearly. This inflected curve now keeps chip designers from
> > simulating new complex chip designs.
>
> The brain is obviously not a quantum computer because it can perform
> irreversible operations like writing a bit of memory to a synapse.
> Furthermore, it does not operate on a superposition of states.
>

Obviously, but operational details aside, it appears that bidirectional
operation is NECESSARY to out-think a reversible and quantum-mechanical
world. If it is necessary, and evidence of it has been seen in neurons, and
present digital computers can NOT do it other than via very slow simulation
methods, then AGI efforts on present digital computers would appear to be
doomed. Right?

>
> >> > 4.  Move to a programmable analog platform. There ARE ways past the
> >> > usual objections to this approach, but no one seems to be interested
> >> > in investing the ~$100M to launch in this direction. Done right, this
> >> > could also support bidirectional computing.
> >>
> >> How much did IBM invest in the TrueNorth neuromorphic processor?
> >
> >
> > As I understand that project, they had a particular model of a synapse
> > that was FAR simpler than real-life synapses, and that model was NOT
> > programmable beyond efficacy, e.g. to gather various statistics on which
> > to base adjustments in their operation.
>
> The purpose is to perform neural network *type* computations. It doesn't
> have to be *just like the brain*. However, it does have a major
> shortcoming that *synapses are not programmed in parallel*, as you
> normally would to implement Hebb's rule or back propagation.
>

 Note my added highlighting in the above paragraph. What good is a synapse
if it can't do the job of being a synapse, part of which is to determine
its own functionality? I never did see any use for such projects - do you
see any good to come from them?

>
> >> Analog computing (to the extent possible at the molecular level) might
> >> help solve the power efficiency problem.
> >
> >
> > Yes.
> >
> >>
> >> The TrueNorth processor
> >> performs 1000 times as many synapse operations per watt as a
> >> conventional computer, because it is encoded as a single bit operation
> >> rather than 1000 bit operations normally needed for a 32 bit
> >> multiply-accumulate. But this is still 100 times more power than
> >> required by the brain.
> >
> >
> > Yea, it is hard to beat our brain. Even direct design a la the Harmon
> > Neuron would still have to drive a LOT of capacitance as its connections
> > ranged widely around a computer.
> >
> > Note that there are some approaches that aren't clearly digital nor
> > analog. My first glimpse at this was the operation of early
> > superheterodyne radios. Digital systems can scan analog stored values
> > and periodically restore them to the nearest valued step. This
> > introduces some noise but breaks away from the usual challenges of
> > long-term analog storage, etc. I suspect success would come from some
> > such approach.
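
The scan-and-restore scheme just described is easy to sketch. A minimal
version (the 0.25 step size is an arbitrary illustrative choice):

```python
def refresh(value, step=0.25):
    """Periodically snap a drifting analog value back to the nearest
    allowed level, as in the digital/analog hybrid described above."""
    return round(value / step) * step

# A stored 0.25 that has drifted to 0.278 is restored:
restored = refresh(0.278)
print(restored)  # 0.25
```

Each refresh adds a little quantization noise but prevents unbounded
long-term drift, which is the trade described above.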
>
> The brain uses about 10^-14 J per operation. Computers use about 10^-9 J
> per operation. Molecular computing with DNA, RNA, and amino acids uses
> around 10^-19 J per operation.
>
> The greater efficiency is due to using slower processors. Neurons move
> ions, not electrons. Ions are about 10^4 times heavier. Signals
> propagate at the speed of sound, about 10^6 times slower than the
> speed of light.


You have a misunderstanding here. As coaxial cables and other transmission
lines get smaller and smaller, their velocity factor - the speed at which
they carry signals - gets slower. An open wire carries signals at ~0.95c.
For an ordinary coaxial cable like you might have running to your TV
antenna, the VF is around 0.7. However, for cables the size of axons, the
VF comes ALL the way down to the speed of sound!!! The very tiny traces on
VLSI chips suffer similar slowdowns, but they aren't quite as slow, because
their planar construction has lower distributed capacitance, and where
speed is critical, designers can make the conductors larger.

Transmission lines can be looked at as distributed inductors, with
distributed capacitors to the world/ground. This sort of network forms a
phase-linear low pass filter. If you think about it, delaying the phase in
proportion to the frequency is EXACTLY what a delay line does, at least for
individual transitions. Tiny conductors have more inductance, and
dielectrics so thin that it takes an electron microscope even to see them
make for HIGH capacitance.
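
The distributed-LC picture gives the standard lossless-line result
v = 1/sqrt(L'C'), with L' and C' per unit length. A quick sketch (the coax
values below are assumed, chosen only to land near the ~0.7 velocity
factor mentioned above):

```python
import math

C_LIGHT = 3.0e8  # speed of light, m/s

def propagation_velocity(L_per_m, C_per_m):
    """Signal speed on a lossless line modeled as distributed series
    inductance and shunt capacitance (both per meter): v = 1/sqrt(L'C')."""
    return 1.0 / math.sqrt(L_per_m * C_per_m)

# Assumed values for an ordinary 50-ohm coax: ~250 nH/m and ~100 pF/m.
v_coax = propagation_velocity(250e-9, 100e-12)
print(round(v_coax / C_LIGHT, 2))  # velocity factor, ~0.67
```

As L' and C' per meter both climb (tiny conductors, vanishingly thin
dielectrics), v falls - which is exactly the slowdown being described.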

Half a century ago, in the heyday of analog computers, designers used
phase-linear low pass filters to delay signals - a complex way of doing
something that is MUCH easier to do digitally.
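
Digitally, the delay that analog designers built from phase-linear filters
collapses to a FIFO buffer. A minimal sketch:

```python
from collections import deque

def make_delay_line(n_samples):
    """A digital delay line: a fixed-length FIFO. Each call pushes one
    input sample and returns the sample from n_samples ticks ago."""
    buf = deque([0.0] * n_samples, maxlen=n_samples)
    def step(x):
        y = buf[0]     # oldest sample, about to be evicted
        buf.append(x)  # newest sample enters
        return y
    return step

delay3 = make_delay_line(3)
outputs = [delay3(x) for x in [1, 2, 3, 4, 5]]
print(outputs)  # [0.0, 0.0, 0.0, 1, 2]
```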

> Computing with molecules is even slower.


I suspect that "molecular computing" is the way that errors are accumulated
so that reprogramming decisions can be made.


> The speed of
> DNA operations like cell replication is measured in microhertz
> (weeks).
>

I have seen suggestions for DNA computing for half a century, especially
for memory, and a few experiments trying to establish some basis for it.
However, to date, there have been NO theoretical OR experimental results
(that I know of) to support this. Hence, for now, I am discounting this
prospect.

On a curious aside, at one time this was "proven" when they fed trained
flatworms to other flatworms, who then trained faster than flatworms who
had not eaten trained flatworms. Then, another researcher decided to use
untrained flatworms for the control diet, rather than standard worm food.
The advantage of eating trained flatworms then disappeared, as apparently
worm food isn't nearly as nutritious as ground-up flatworms.

>> > Your observation that continuous operation is probably necessary is
>> > good, but still not entirely sufficient to get AGI working. Besides
>> > continuous and bidirectional operation, I wonder what ELSE is needed to
>> > close this gap?!!!
>>
>> Just a vast amount of computing power, training data, and programming
>> effort. If it was a simple answer then we would have solved it 50
>> years ago.
>
>
> THAT is the mentality that has stopped progress for the last 50 years!!!
> Continuous operation, bidirectional computation, etc., are NOT the simple
> answers that "a vast amount of computing power, training data, and
> programming effort" are. We MUST first understand the problem BEFORE
> anyone can launch a successful effort.
>
> For example, note my recent patent that bases parsing on least frequently
> used words. Parsing text is "obviously" easy for a modern fast digital
> computer to do, yet when you get down into the nuts and bolts of it, it
> brings a modern computer to its knees UNLESS you have such a trick to
> apply. The trick might have been conjured up 40 years ago if ANYONE had
> ever bothered to understand the barrier and looked for a way around it.
> AGI has yet to "man up" to similar challenges. I am STILL seeing postings
> from people who are working on ideas to "understand" NL, but with no such
> tricks to circumvent the computational barrier that awaits them. AGI is
> VASTLY more difficult.
>
> Much of AGI can be likened to the chess playing problem. Every half move
> further that the computer considers multiplies the computational effort
> by ~20X. A full move costs ~400X, and 1.5 moves cost 8,000X, which is
> approximately the ratio between vacuum tubes (e.g. an IBM 709) and the
> fastest modern processors. Get that: a modern processor allows chess
> playing programs to look just 1.5 moves further ahead. Of course
> architecture (like Deep Blue) helps, but still the same ratio remains
> for any given architecture.
>
> However, in natural language you have MANY more than 20 choices for each
> subsequent word in text, so absent tricks like mine, the ratio between
> vacuum tube and modern processors is only ~2 more words in the lengths
> of sequences being analyzed.
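
As an aside, the branching-factor arithmetic above checks out; a quick
sketch using the ~20 choices per half-move figure quoted in the text:

```python
import math

BRANCH = 20  # ~20 legal choices per half-move (ply), as quoted above

full_move = BRANCH ** 2   # one full move = 2 plies -> 400X
move_1_5  = BRANCH ** 3   # 1.5 moves = 3 plies -> 8,000X
print(full_move, move_1_5)  # 400 8000

# An ~8,000X raw speedup (vacuum tubes -> modern processors) therefore
# buys only log_20(8000) = 3 extra plies, i.e. 1.5 moves of lookahead.
extra_plies = math.log(8000) / math.log(BRANCH)
print(round(extra_plies, 1))  # 3.0
```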
>
> AGI grows MUCH faster than this, as there is a need to relate almost
> everything with almost everything else. It is no longer 20X, or 100X,
> but more like 10^6X per time step. Here, the ratio between vacuum tubes
> and modern processors would hardly be noticeable. No, there is NO way
> that more processor performance can work our way out of this. Not 1,000
> times as much, and not 1,000,000 times as much. We need some tricks that
> we don't now have to even make something that is fast enough to play
> with, let alone make something that is fast enough to be useful.
>
>
> You should decide to start working on understanding the problems that no
> one yet understands, so you can shift to finding tricks around them, or
> quietly step back out of the way of others who seek to do this. OF
> COURSE you can't quietly step back out of the way, which is why you are
> here on this forum, which leaves you only one viable option.  B-:D>

> A simple trick like your patent is not going to solve the problem of
> parsing natural language.


Here the operative word is "A". It will take more tricks, but there wasn't
even a way to experiment with large volumes of text until the speed issue
was solved.

That depends on what you think "the" problem is. Missing words, Winograd's
referent issues, etc. For the intended application - detecting writing
indicating the presence of any of a variety of well-understood problems,
which is where computers REALLY shine - well known parsing methods are
entirely adequate.
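
To illustrate (this is a generic sketch of the least-frequent-word idea,
NOT the method claimed in the patent - the patterns and word frequencies
below are made up): index each candidate pattern under its rarest word, so
full matching only runs when that rare anchor actually appears:

```python
from collections import Counter

# Hypothetical patterns to detect, with made-up corpus word frequencies.
patterns = [
    ["want", "to", "hurt", "myself"],
    ["can", "not", "sleep", "at", "night"],
]
word_freq = Counter({"want": 900, "to": 5000, "hurt": 40, "myself": 120,
                     "can": 1200, "not": 3000, "sleep": 80, "at": 4000,
                     "night": 300})

# Index each pattern under its LEAST frequent word.
index = {}
for p in patterns:
    anchor = min(p, key=lambda w: word_freq[w])
    index.setdefault(anchor, []).append(p)

def scan(text):
    """Attempt full matching only when a rare anchor word appears."""
    words = text.lower().split()
    hits = []
    for w in set(words):
        for p in index.get(w, []):
            if all(t in words for t in p):  # cheap stand-in for real parsing
                hits.append(" ".join(p))
    return hits

print(scan("some nights i just can not sleep at night"))
```

Because common words like "to" or "at" never trigger a lookup, the
expensive matching step runs only on rare-word hits.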

BTW, I think I have found an approach to deal with Winograd's referent
issues, but I am not yet ready to put it up on a public forum.

> But I would be happy for you to prove me wrong.
>

Hey, I agree with you that computers aren't going to understand
unrestrained natural language anytime soon - and probably not much before
AGIs walk among us - if even then. My patent makes that specific point -
there is NOTHING in my patent that could confer the natural language
abilities of an ordinary 5-year-old.

Steve



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
