Josh,

I apparently failed to clearly state my central argument. Allow me to try
again in simpler terms:

The difficulty in proceeding in both neuroscience and AI/AGI is NOT a lack
of technology or of clever people to apply it, but rather a lack of
understanding of the real world and of how to interact effectively within
it. Some clues to the totality of the difficulty are the ~200 different
types of neurons, and the ~40 years of ineffective AI/AGI research. I have
seen NO recognition of this fundamental issue in other postings on this
forum. This level of difficulty strongly implies that NO clever programming
will ever achieve human-scale (and beyond) intelligence until some way is
found to "mine" the evolutionary lessons "learned" during the last ~200
million years.

Note that the CENTRAL difficulty in interacting effectively in the real
world is working with and around the creatures that already inhabit it,
which are the product of ~200 million years of evolution. Even a "perfect"
AGI would need some very "imperfect" logic to help predict the actions of
our world's present inhabitants. Hence it seems (to me) that there is
probably no simple solution, as otherwise one would already have evolved
during the last ~200 million years, instead of the highly complex creatures
that we now are.

That having been said, I will comment on your posting...

On 6/4/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
>
> On Tuesday 03 June 2008 09:54:53 pm, Steve Richfield wrote:
>
> > Back to those ~200 different types of neurons. There are probably some
> > cute tricks buried down in their operation, and you probably need to
> > figure out substantially all ~200 of those tricks to achieve human
> > intelligence. If I were an investor, this would sure sound pretty scary
> > to me without SOME sort of "insurance" like scanning capability, and
> > maybe some simulations.
>
> I'll bet there are just as many cute tricks to be found in computer
> technology, including software, hardware, fab processes, quantum mechanics
> of FETs, etc -- now imagine trying to figure all of them out at once by
> running Pentiums thru mazes with a few voltmeters attached. All at once
> because you never know for sure whether some gene expression pathway is
> crucially involved in dendrite growth for learning or is just a kludge
> against celiac disease.


Of course, this has nothing to do with creating the "smarts" to deal with
our very complex real world well enough to compete with those of us who
already inhabit it.

> That's what's facing the neuroscientists, and I wish them well -- but I
> think we'll get to the working mind a lot faster studying things at a
> higher level.


I agree that high-level views are crucial, but given the present lack of
low-level knowledge, I see no hope of solving all of the problems while
remaining only at a high level.

> For example:
>
> http://repositorium.sdum.uminho.pt/bitstream/1822/5920/1/ErlhagenBicho-JNE06.pdf


From that article: "Our close cooperation with experimenters from
neuroscience and cognitive science has strongly influenced the proposed
architectures for implementing cognitive functions such as goal inference
and decision making." THIS is where efforts are needed - in bringing the
disparate views together rather than "keeping your head in the clouds" with
only a keyboard and screen in front of you.

In the 1980s I realized that neither neuroscience nor AI could proceed to
its manifest destiny until a system of real-world mathematics was developed
that could first predict the details of neuronal functionality, and then
hopefully show what AI needed. The "missing link" seemed to be the lack of
knowledge of just what the units were in the communications between
neurons. Pulling published and unpublished experimental results together,
mostly from Kathryn Graubard's research, I showed (and presented at the
first Int'l NN Conference) that there was more than one such unit, and that
one was clearly the logarithm of the probability of an assertion being
true. Presuming this leads directly to a mathematics of synapses that
accurately predicts the strange non-linear and discontinuous transfer
functions observed in inhibitory synapses, etc. It also leads to the
optimal manipulation of synaptic efficacies. However, apparently NO ONE
ELSE saw the value in this. Without the units there can be no substantial
mathematics, and without the mathematics there is nothing to guide
neuroscience, NN, or AI "research". Hence, I remain highly skeptical of
claimed "high level" views.
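To make the appeal of log-probability units concrete (a minimal sketch of the general idea, NOT my actual synaptic mathematics): if each signal carries the log of the probability that some assertion is true, then a unit that simply SUMS its inputs is multiplying the underlying probabilities, i.e., combining independent evidence. The function names here are illustrative only.

```python
import math

def log_prob(p):
    """Encode a probability as a log-probability -- the hypothesized 'unit'."""
    return math.log(p)

def combine(log_ps):
    """Summing log-probabilities multiplies the underlying probabilities,
    the way a unit summing its inputs would combine independent evidence."""
    return sum(log_ps)

def prob(lp):
    """Decode a log-probability back to a probability."""
    return math.exp(lp)

# Two independent assertions, each true with probability 0.8:
joint = prob(combine([log_prob(0.8), log_prob(0.8)]))
# joint == 0.8 * 0.8 == 0.64
```

The point of the sketch is only that once the unit is known, simple arithmetic on signals acquires a precise probabilistic meaning, which is what makes a real mathematics of synapses possible.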

Steve Richfield


