Steve,

 

I can't compare myself with a good mathematician. Just look at my list of
publications on my web site; there is next to nothing on Mathematics. But
emergence is Physics, and the math I am talking about is the math of the
model I have proposed to explain it. The claim I make is that I knew where
to look, and I knew how to figure out what it was that I had found. 

 

Just as you do, I also suffer very much because of the lack of teamwork. But
the order of things I see is a little different. I see first something, some
idea, well, a principle, that can bring people together around it. Then,
teamwork will naturally follow. Then, funding will follow, in that order. I
also see a need to improve academic standards. A blog is not going to do
that. I have some hopes for JAGI. For now, JAGI appears to be scholarly, but
it will have to overcome powerful established interests to stay scholarly.

 

And I still don't believe we need 200 types of neurons to understand
intelligence. Even for a full implementation on some future supercomputer,
the properties of chips will be very different from the properties of
neurons, and the implementation will be very different. Now, if the goal is
Neuroscience, then every detail about the brain matters. Fortunately, that's
not what I am doing. 

 

One way to help the three disciplines come together is to do simple things,
like input/output for a piece of a retina of some animal, something that
everyone can understand. I have been trying to convince Alan of this, but he
thinks one would have to model all details. I believe EI can do it with a
much simpler model, just like pixels in a TV camera. The goal would be to
predict compression in the retina, and see if it agrees with the observed
compression. 
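A toy version of this experiment can be sketched without modeling any neural detail: treat the retina as a grid of pixels, pool them the way many photoreceptors converge onto far fewer ganglion cells, and measure the resulting compression ratio. The image size, pooling factor, and activity threshold below are invented for illustration; nothing here is a model anyone in this thread has actually proposed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene": a 64x64 grayscale image with smooth structure plus noise.
scene = np.cumsum(np.cumsum(rng.standard_normal((64, 64)), axis=0), axis=1)

# Stage 1: block-average 4x4 patches, like many photoreceptors pooling
# onto a much smaller number of ganglion cells (64x64 -> 16x16).
pooled = scene.reshape(16, 4, 16, 4).mean(axis=(1, 3))

# Stage 2: keep only cells that deviate from the mean by more than one
# standard deviation -- a crude stand-in for center-surround coding,
# where only "surprising" cells fire.
deviation = pooled - pooled.mean()
active = np.abs(deviation) > deviation.std()

compression_ratio = scene.size / max(active.sum(), 1)
print(f"{scene.size} inputs -> {active.sum()} active outputs "
      f"(about {compression_ratio:.0f}:1)")
```

The measurement proposed in the paragraph above would then be the comparison: does a ratio predicted this way agree with the compression actually observed in a real retina? That comparison needs recordings, which this sketch obviously does not provide.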

 

Sergio

 

 

 

From: Steve Richfield [mailto:[email protected]] 
Sent: Saturday, June 23, 2012 2:22 PM
To: AGI
Subject: Re: [agi] Prediction Did Not Work (except in narrow ai.)

 

Sergio,

In this and other postings, you are making the same mistake that most others
in AGI make:

You point to the many areas where interesting things are obviously happening
that people don't yet understand, and say that THERE is where we should
be working, not diagramming or simulating neurosystems, etc. It is not that
you are "wrong", but rather that your view contains an oxymoron.

What took a hundred million years of evolution and 200 different types of
neurons to make work is NOT going to be "dreamed up" by anyone here or
anywhere else. Maybe with another 1,000 years of talented mathematical work,
there might be some light at the end of the tunnel. Obviously we don't want
to wait that long, so we need to find another path.

You are asking questions that are fundamentally mathematical in nature.
These questions would have already been solved by talented mathematicians
(there are lots of them), except that your observations are too vague to be
turned into problems, then into questions, and then into
solutions.

Any competent mathematician can answer a question.

It takes a really good mathematician to transform a problem into a question.

It is often beyond human capability to transform an observation into a
problem. This has been and will continue to be the show stopper for AGI.
Here, you either need an AGI to design an AGI, or you need more information
than we now have. Having only my artificially enhanced intellect to apply, I
am just pointing out the "obvious", at least obvious to me, that we need
more information. Where would YOU look for more information?

Once we have more information, it will take multidisciplinary cooperation
that doesn't now exist to get over the remaining humps.

More comments follow...

On Sat, Jun 23, 2012 at 9:34 AM, Sergio Pissanetzky <[email protected]>
wrote:

Alan,

my point didn't get through. My only contribution to science is the
inference. I believe it is important, and I feel obligated to popularize it.
To popularize, I need to talk applications, not just pure Math.


Without the math you aren't going anywhere. Of course that math doesn't now
exist. Hence, I don't expect to see AGIs anytime soon, at least not until
the AGI community gets onto a productive pathway.


In NS, that means applications to the brain. I know a bunch of things about
the brain. They are things neuroscientists do not know. And I don't know
many of the things they do. In my mind, this calls for teamwork, not hiding
in a hole and shutting up.


YES - it takes a combination of wet-lab science, math, and people like you
who look at where this is all going, all working TOGETHER.


Here are the things I know. I know the brain obeys laws of conservation,
just because it is a physical system, and I know that laws of conservation
are associated with symmetries in the physical system. I know the brain
makes invariant representations (the chair upside down is still a chair). I
would like to know if the two things are related.
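The symmetry-conservation link being appealed to here is Noether's theorem, which is standard physics; whether it transfers to a dissipative, noisy system like the brain is exactly the open question. In its textbook classical form:

```latex
% Noether's theorem (classical mechanics): a continuous symmetry of the
% Lagrangian implies a conserved quantity.
\text{If } L(q, \dot q, t) \text{ is invariant under } q \mapsto q + \epsilon\,\delta q,
\quad \text{then} \quad
Q = \frac{\partial L}{\partial \dot q}\,\delta q
\quad \text{is conserved:} \quad \frac{dQ}{dt} = 0.
```

Connecting such a conserved quantity to the brain's invariant representations is the conjecture under discussion here, not a settled result.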


Good observations and a start at formulating a problem. 


I see causality in the brain. Sensory organs collect causal information (it
is still causal even if rate-coded pulses originate from a cone). Muscles are
driven by causal commands. Neurons firing cause other neurons to fire. There
are exceptions, I know that too.

Causal sets have symmetries, and they obey laws of conservation, which
result in invariant representations. I have collected some experimental
evidence suggesting that these representations are the same that the brain
makes, given the same information. Should I pursue these matters further? Or
should I just ignore the whole thing because, for example, "neurons
sometimes fire at random"?
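At toy scale, the symmetry claim can be made concrete: represent a causal set as a set of ordered pairs and count the relabelings that leave it unchanged (its automorphisms). What survives every relabeling, the isomorphism class, is an "invariant representation" in the simplest sense. The four-event diamond below is an arbitrary example, not data produced by EI.

```python
from itertools import permutations

# Toy causal set: events {0, 1, 2, 3} with pairs (a, b) meaning
# "a causes b" -- a diamond-shaped partial order.
causet = {(0, 1), (0, 2), (1, 3), (2, 3)}

def relabel(cs, perm):
    """Apply a permutation of event labels to every causal pair."""
    return {(perm[a], perm[b]) for a, b in cs}

events = sorted({e for pair in causet for e in pair})

# Automorphisms: relabelings that map the causal set onto itself.
autos = [p for p in permutations(events)
         if relabel(causet, dict(enumerate(p))) == causet]
print(f"{len(autos)} automorphisms: {autos}")
```

Only the identity and the swap of the two middle events preserve the relations, so this causal set has exactly two symmetries: events 1 and 2 are interchangeable, which is the kind of structural invariance the conservation argument needs.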


How could a researcher distinguish "random" from "unknown function"? When
you see statements like this, just chalk it up to their ignorance.

Note that there is good evidence that we compute, especially in our visual
systems, which have been most studied, with rates of change of logarithms,
and that logarithmic curves are discontinuous at zero. Hence, even the
slightest system noise around zero would be seen as apparently random
pulses (from the ~10% of neurons that actually produce any pulses, as most
neurons are continuously analog). Now, try to explain even this simple
concept to a neuroscientist. They are most likely to inquire about what you
have been smoking.
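That claim is easy to check numerically. If a cell reports something like the rate of change of a logarithm, d/dt ln x = x'/x, the division by x blows tiny input noise up into wild output wherever the signal crosses zero. The sine signal and the noise level below are arbitrary illustration values, not physiological ones.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.01, 2 * np.pi, 2000)

signal = np.sin(t)                                    # crosses zero periodically
noisy = signal + 1e-3 * rng.standard_normal(t.size)   # tiny additive noise

# Rate of change of the logarithm: d/dt ln|x| = x' / x.
rate = np.gradient(noisy, t) / noisy

near_zero = np.abs(signal) < 0.01   # samples at the zero crossings
far = np.abs(signal) > 0.5          # samples well away from zero

print(f"output spread near zero crossings: {rate[near_zero].std():.1f}")
print(f"output spread away from zero:      {rate[far].std():.1f}")
```

Away from zero the output tracks the underlying signal; at the crossings it is dominated by amplified noise and would look like random pulses to a recording electrode.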

Or because a cone on the retina gives out many
pulses instead of just one?


Maybe that is simply what is needed to work right? Understanding WHY that is
needed to work right is a MUCH harder problem.


The key here is to pursue the big matters without getting bogged down in
the details.


I think you are saying to look at things top-down rather than bottom-up,
which I agree with. However, at some point the top and bottom must meet
before you know enough to start coding.
 

I am trying to say something useful about the brain that
neuroscientists can understand, without sacrificing the big picture. I feel
free to disregard details when I believe that the big picture is independent
of those details. For example, if a cone produces a string of pulses, not
just one as I proposed, would then the brain not be a physical system? Would
it not obey conservation laws? Would it not make invariant representations?
If I can show that a chair upside down is a chair with one pulse, would that
be necessarily false for 3 pulses?


Until you fully understand the problem that is being solved, you can't make
ANY valid conclusions. You are now trying to think about this when the
answers simply can't be reasoned out, given the present lack of understanding
of the PROBLEM.


I know even more things that concern the brain. I know that EI is not an
algorithm and cannot be implemented as a circuit or network. I am very
concerned about projects to reverse-engineer the brain and simulate it on a
computer using a program. Because they are not even looking at the right
things. They can simulate the entire brain in ultimate detail, with strings
of pulses coming from cones, with all the details of the optical nerves, and
still not find EI!


So, how would you ever debug such a system? No, it is necessary to
UNDERSTAND the vast majority of what is happening to ever get a simulation to
actually work.
 

Because it is not there. They ought to be looking at the
dynamics of the neurons, doing simple experiments with brain-on-a-dish or
retinas that compress, and trying to understand how it all works, before
embarking on blind efforts.


As in prior postings, this research is all funded by the Department of
Health, and they don't give a damn about computation - just diseases. In
short, I agree with you and point out that the fundamental underlying
disagreement with the "world" is very political in nature. 


And so should I: apply EI to simple things, understand what they
do, find the principle, and only then, with the principle in hand, embark on
implementation details.

The question is: do neurons do EI, or not? And if they do, how do they do
it? So how about some teamwork?


I have been trying to pull this together for decades, but STILL people just
don't "get it". Neuroscientists just don't see any value in math that they
don't understand; computer people can't see any value in understanding
wetware when they see their programs working entirely differently; and the
mathematicians hardly know where to start, having not even been given "clean"
observations, let alone problems or questions.

This entire area is going nowhere until we get the teamwork you mentioned,
yet each of the three areas that need to come together currently sees the
other two as completely irrelevant. Given my past efforts and failures, I
believe that we are bumping up against a fundamental limitation of the human
brain - the inability to see the value of other views of things. This occurs
in nearly every area of human endeavor, and especially here in AGI.

It seems EVER so obvious to me that the crop of people here aren't going to
be building any AGIs, because they are literally hiding from the very
information they need to succeed.

Steve



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now

RSS Feed: https://www.listbox.com/member/archive/rss/303/18883996-f0d58d57
Modify Your Subscription:
https://www.listbox.com/member/?

Powered by Listbox: http://www.listbox.com





 



