One tangential comment.
You're still thinking linearly. Machines are linear chains of parts.
Cause-and-effect thinking made flesh/metal.
With organisms, however, you have whole webs of parts acting more or less
simultaneously.
We will probably need to bring that organic framework (field thinking versus
chain thinking?) into the design of AGI machines and robots.
In relation to your subject, you see, incoming information is actually analysed
by the human system on multiple levels, and often in terms of multiple domain
associations, simultaneously.
And that's why we often get "confused" - we don't always simply "not understand."
Sometimes we do know clearly what we don't understand - "what does that word
[actually] mean?" But sometimes we attend to a complex argument and we know it
doesn't really make sense to us, but we don't know which part[s] of it don't
make sense or why - and we have to patiently and gradually unravel that knot of
confusion.
From: Steve Richfield
Sent: Monday, July 12, 2010 7:02 AM
To: agi
Subject: [agi] Mechanical Analogy for Neural Operation!
Everyone has heard about the water analogy for electrical operation. I have a
mechanical analogy for neural operation that just might be "solid" enough to
compute at least some characteristics optimally.
No, I am NOT proposing building mechanical contraptions, just using the concept
to compute neuronal characteristics (or AGI formulas for learning).
Suppose neurons were mechanical contraptions that receive inputs and
communicate outputs via mechanical movements. If one or more of the neurons
connected to a neuron's output can't make sense of a given input in light of
its other inputs, then its mechanism would jam, physically resisting the
several inputs that don't make mutual sense, with the resistance possibly
coming from some downstream neuron.
This would utilize position to resolve opposing forces, e.g. one "force" being
the observed inputs, and the other "force" being that they don't make sense,
suggest some painful outcome, etc. In short, this would enforce the sort of
equation, in place of the present formulaic view of neurons (and AGI coding),
that I have suggested in past postings may be present, and would show that the
math may not be all that challenging.
Uncertainty would be expressed as stiffness/flexibility, computed limitations
would be handled with over-running clutches, etc.
Propagation of forces would come close (perhaps perfectly?) to identifying
just where in a complex network something should change in order to learn as
efficiently as possible.
Once the force concentrates at some point, it then "gives": something slips or
bends to unjam the mechanism. Thus, learning is effected.
Note that this suggests little difference between forward propagation and
backwards propagation, though real-world wet design considerations would
clearly prefer fast mechanisms for forward propagation, and compact mechanisms
for backwards propagation.
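The mechanical picture above can be sketched in a few lines of toy code. This
is only an illustration under invented assumptions, not anything from the
original post: the "network" is reduced to a single chain of multiplicative
"rods," the jam force is just the output's deviation from an expected
position, and names like `slip_rate` and `learn_by_slipping` are made up for
the sketch. Stiffness encodes uncertainty as described: the jam force
propagates back through the chain, and the joint feeling the most force
relative to its stiffness is the one that slips.

```python
def forward(weights, x):
    """Pass a 'displacement' x through a chain of mechanical rods.

    Each rod scales the incoming movement by its weight; the list of
    positions along the chain is returned.
    """
    acts = [x]
    for w in weights:
        acts.append(w * acts[-1])
    return acts


def learn_by_slipping(weights, x, target, stiffness, slip_rate=0.25):
    """One 'jam and slip' step of the mechanical learning rule.

    The mismatch between the output position and the target is treated
    as a force. It is propagated back through the chain, each joint
    feeling it in proportion to its leverage on the output. The joint
    with the largest force-to-stiffness ratio 'gives' and slips, which
    unjams the mechanism. Stiff joints (low uncertainty) barely move.
    """
    acts = forward(weights, x)
    force = acts[-1] - target  # the "jam": how hard output conflicts

    # Force felt at each joint = jam force times that joint's leverage
    # (its input position times all downstream rod scalings).
    felt = []
    for i in range(len(weights)):
        leverage = acts[i]
        for w in weights[i + 1:]:
            leverage *= w
        felt.append(force * leverage)

    # The most flexible, most-stressed joint absorbs the force by slipping.
    i = max(range(len(weights)), key=lambda k: abs(felt[k]) / stiffness[k])
    weights[i] -= slip_rate * felt[i] / stiffness[i]
    return weights


weights = [0.5, 2.0]
stiffness = [1.0, 10.0]  # joint 0 is flexible (uncertain), joint 1 is stiff
learn_by_slipping(weights, x=1.0, target=2.0, stiffness=stiffness)
# the flexible joint slips to relieve the jam; the stiff joint holds position
```

Note that, as the post suggests, the forward pass and the force-propagation
pass here are nearly the same computation run in opposite directions; only the
slip step distinguishes "backwards" from "forwards."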
Epiphany or mania?
Any thoughts?
Steve
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/