I assume you meant to post this to the AGI list.

On Tue, Jan 27, 2015 at 5:17 PM, Steve Richfield
<[email protected]> wrote:
> Matt,
>
> Good - we are looking at the detail here...
>
> On Tue, Jan 27, 2015 at 12:29 PM, Matt Mahoney <[email protected]>
> wrote:
>>
>> On Tue, Jan 27, 2015 at 1:46 PM, Steve Richfield via AGI
>> <[email protected]> wrote:
>>
>> > We had a discussion here a couple of years back, to the effect that if
>> > you differentiate signals leading into a NN, and integrate the results
>> > from the NN, you then get the same as if you didn't do the integration
>> > and differentiation at all.
>>
>> Almost, but neurons have a nonlinear response.
>
>
> Agreed. The near-reversibility argument was just a foil to show just how
> easy temporal learning is to do.
>>
>>
>> > However, "learning" in such a system would then become temporal
>> > learning. Of course, differentiation, which is widely utilized in living
>> > systems, is only possible in CONTINUOUS systems.
>>
>> Neurons are not continuous. The relevant signal is approximated by the
>> spiking rate. But I agree that differentiation is an important
>> component of perception.
>
>
> <<1% of neurons spike - and they are BIG, making them easy enough to find
> and monitor even with the primitive vacuum-tube equipment that was
> initially used. Spiking is apparently the way that the neural equivalent of
> "bus drivers" operates over long distances. Nearly all neurons operate
> continuously. Apparently they are electrically fast, and chemically slow,
> moving ions of various types around to keep statistics on which to adjust
> their functionality.

About 90% of brain cells are non-spiking astrocytes. It is debatable
how much computation they do.
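For what it's worth, the differentiate-then-integrate point earlier in the thread is easy to check numerically. A minimal sketch (my own illustration, with tanh standing in for a neuron's nonlinearity): integration after the network undoes differentiation before it only when the network is linear.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000)
x = np.sin(t)                       # input signal
dx = np.gradient(x, t)              # differentiate before the "network"

linear = lambda s: 0.5 * s          # a linear unit commutes with d/dt
nonlin = np.tanh                    # a neuron-like nonlinearity does not

def integrate(y, t):
    # Cumulative trapezoidal integral, starting from zero.
    return np.concatenate([[0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(t))])

lin_out = integrate(linear(dx), t)  # recovers 0.5 * (x - x[0]) almost exactly
non_out = integrate(nonlin(dx), t)  # does NOT recover tanh(x) - tanh(x[0])

err_lin = np.max(np.abs(lin_out - linear(x - x[0])))
err_non = np.max(np.abs(non_out - (nonlin(x) - nonlin(x[0]))))
print(err_lin, err_non)  # err_lin is tiny; err_non is not
```

So the "near-reversibility" claim holds only up to the nonlinearity, as noted above.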

>> > This would also support bidirectional computing, which I believe is also
>> > a necessary requirement.
>>
>> Reversible computing is only required for a quantum computer, which
>> the brain is not.
>
>
> Do you have ANYTHING to support this statement? Most real world systems,
> even crude mechanical systems, operate "reversibly" in that resistance to
> movement modifies the movement, etc. Predicting the operation of such
> systems is MUCH easier to do in a reversible context. Sure, you can simulate
> these digitally, but the computational requirements rise FASTER than
> linearly. This inflected curve now keeps chip designers from simulating new
> complex chip designs.

The brain is obviously not a quantum computer because it can perform
irreversible operations like writing a bit of memory to a synapse.
Furthermore, it does not operate on a superposition of states.

>> > 4.  Move to a programmable analog platform. There ARE ways past the
>> > usual objections to this approach, but no one seems to be interested in
>> > investing the ~$100M to launch in this direction. Done right, this could
>> > also support bidirectional computing.
>>
>> How much did IBM invest in the TrueNorth neuromorphic processor?
>
>
> As I understand that project, they had a particular model of a synapse that
> was FAR simpler than real-life synapses, and that model was NOT programmable
> beyond efficacy - e.g., it could not gather various statistics on which to
> base adjustments in its operation.

The purpose is to perform neural-network-style computations. It doesn't
have to work just like the brain. However, it does have a major
shortcoming: synapses are not programmed in parallel, as you normally
would to implement Hebb's rule or back propagation.
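For concreteness, Hebb's rule is an inherently parallel synapse update: every weight moves at once by the outer product of pre- and post-synaptic activity. A minimal sketch (illustrative Python, nothing TrueNorth-specific):

```python
import numpy as np

rng = np.random.default_rng(0)
pre = rng.standard_normal(4)        # pre-synaptic firing rates
post = rng.standard_normal(3)       # post-synaptic firing rates
W = np.zeros((3, 4))                # 12 synaptic weights

eta = 0.1                           # learning rate
W += eta * np.outer(post, pre)      # Hebb: all 12 synapses updated in one step

print(W.shape)  # (3, 4)
```

A chip that must write synapses one at a time serializes exactly this step.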

>> Analog computing (to the extent possible at the molecular level) might
>> help solve the power efficiency problem.
>
>
> Yes.
>
>>
>> The TrueNorth processor
>> performs 1000 times as many synapse operations per watt as a
>> conventional computer, because each synapse operation is encoded as a
>> single-bit operation rather than the ~1000 bit operations normally
>> needed for a 32-bit multiply-accumulate. But this is still 100 times
>> more power than required by the brain.
>
>
> Yeah, it is hard to beat our brain. Even direct design à la the Harmon
> neuron would still have to drive a LOT of capacitance as its connections
> ranged widely around a computer.
>
> Note that there are some approaches that are neither clearly digital nor analog.
> My first glimpse at this was the operation of early superheterodyne radios.
> Digital systems can scan analog stored values and periodically restore them
> to the nearest valued step. This introduces some noise but breaks away from
> the usual challenges of long-term analog storage, etc. I suspect success
> would come from some such approach.

Computers use about 10^-9 J per operation. The brain uses about 10^-14
J. Molecular computing with DNA, RNA, and amino acids uses around
10^-19 J per operation.

The greater efficiency is due to using slower processors. Neurons move
ions, not electrons. Ions are about 10^4 times heavier than electrons. Signals
propagate at the speed of sound, about 10^6 times slower than the
speed of light. Computing with molecules is even slower. The speed of
DNA operations like cell replication is measured in microhertz
(weeks).
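Taken at face value, those figures work out as follows. A back-of-envelope sketch using the order-of-magnitude estimates above (the thread's estimates, not measured values):

```python
# Joules per operation, as estimated in the discussion above.
joules_per_op = {
    "conventional computer": 1e-9,
    "brain":                 1e-14,
    "molecular (DNA/RNA)":   1e-19,
}

for name, j in joules_per_op.items():
    # Operations per second sustainable on a 1-watt power budget.
    print(f"{name}: {1.0 / j:.0e} ops/s per watt")
```

That is the 10^5 brain-vs-computer efficiency gap, with molecular computing another 10^5 beyond the brain, paid for in speed as described above.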

>> > Your observation that continuous operation is probably necessary is
>> > good, but still not entirely sufficient to get AGI working. Besides
>> > continuous and bidirectional operation, I wonder what ELSE is needed to
>> > close this gap?!!!
>>
>> Just a vast amount of computing power, training data, and programming
>> effort. If it was a simple answer then we would have solved it 50
>> years ago.
>
>
> THAT is the mentality that has stopped progress for the last 50 years!!!
> Continuous operation, bidirectional computation, etc., are NOT the simple
> answers that "a vast amount of computing power, training data, and
> programming effort" are. We MUST first understand the problem BEFORE
> anyone can launch a successful effort.
>
> For example, note my recent patent that bases parsing on least frequently
> used words. Parsing text is "obviously" easy for a modern fast digital
> computer to do, yet when you get down into the nuts and bolts of it, it
> brings a modern computer to its knees UNLESS you have such a trick to apply.
> The trick might have been conjured up 40 years ago if ANYONE had ever
> bothered to understand the barrier and looked for a way around it. AGI has
> yet to "man up" to similar challenges. I am STILL seeing postings from
> people who are working on ideas to "understand" NL, but with no such tricks
> to circumvent the computational barrier that awaits them. AGI is VASTLY more
> difficult.
>
> Much of AGI can be likened to the chess-playing problem. Every half move
> further that the computer considers multiplies the computational effort by
> ~20X. A full move costs ~400X, and 1.5 moves cost ~8,000X, which is
> approximately the ratio between vacuum tubes (e.g. an IBM 709) and the
> fastest modern processors. Get that: a modern processor allows chess-playing
> programs to look just 1.5 moves further ahead. Of course architecture (like
> Deep Blue) helps, but still the same ratio remains for any given
> architecture.
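As an aside, the half-move arithmetic here is just powers of ~20, the usual rough estimate of legal moves per chess position:

```python
b = 20                          # ~legal moves per half move (ply) in chess
for plies in (1, 2, 3):         # 0.5, 1.0, and 1.5 extra full moves
    print(plies / 2, "moves:", b ** plies)  # 20, 400, 8000
```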
>
> However, in natural language you have MANY more than 20 choices for each
> subsequent word in text, so absent tricks like mine, the gap between
> vacuum-tube and modern processors buys only ~2 more words in the lengths of
> sequences being analyzed.
>
> AGI grows MUCH faster than this, as there is a need to relate almost
> everything with almost everything else. It is no longer 20X, or 100X, but
> more like 10^6X per time step. Here, the ratio between vacuum tubes and
> modern processors would hardly be noticeable. No, there is NO way that more
> processor performance can work our way out of this. Not 1,000 times as much,
> and not 1,000,000 times as much. We need some tricks that we don't now have
> to even make something that is fast enough to play with, let alone make
> something that is fast enough to be useful.
>
> You should decide to start working on understanding the problems that no one
> yet understands, so you can shift to finding tricks around them, or quietly
> step back out of the way of others who seek to do this. OF COURSE you can't
> quietly step back out of the way, which is why you are here on this forum,
> which leaves you only one viable option.  B-:D>
>
> Steve
>
>

A simple trick like your patent is not going to solve the problem of
parsing natural language. But I would be happy for you to prove me
wrong.


-- 
-- Matt Mahoney, [email protected]

