Sergio,

On Thu, Jun 28, 2012 at 6:50 AM, Sergio Pissanetzky <[email protected]> wrote:
> I can't compare myself with a good mathematician. Just look at my list
> of publications on my web site, there is next to nothing on
> Mathematics. But emergence is Physics, and the math I am talking about
> is the math of the model I have proposed to explain it. The claim I
> make is that I knew where to look, and I knew how to figure out what it
> was that I had found.

A guy at IBM showed that pretty much ANY function could be performed by running the arguments through appropriate nonlinearities, adding them up, and running the result through an appropriate nonlinearity. To illustrate: multiplication can be performed by converting to logarithms, adding, and then taking the antilog. It appears that neurons are doing very much the same thing. There tend to be MANY more excitatory synapses than inhibitory synapses, so much of the nonlinear complexity predictably finds its way to the inhibitory synapses, or at least that is the theory. The last time I looked, the transfer functions of just TWO inhibitory synapses had ever been plotted. It took a mammoth team effort to quiet everything down enough to see the transfer function of a single synapse; the BIG challenge is to keep ALL of the involved neurons alive yet shut down long enough to do this.

I then wrote a paper showing that the function needed to perform an AND NOT on probabilities was EXACTLY one of those two functions. So far, no one has advanced any theories as to what the other function does. Later, I wrote a paper explaining how that SAME function would also work on the derivatives of logarithms of probabilities, which has other desirable properties, like extreme ease of doing temporal learning. This is the sort of "glue math" that is needed: showing the wet-lab researchers what to look for, and explaining what they actually find in computational terms.

> Just as you do, I also suffer very much because of the lack of
> teamwork. But the order of things I see is a little different.
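[An aside before continuing the thread: the log/antilog arithmetic and the AND-NOT-on-probabilities function described above can be sketched in a few lines. This is a minimal sketch of my own; the function names are mine, and the AND NOT form assumes independent probabilities.]

```python
import math

def log_mul(a, b):
    # Multiplication the "nonlinearity -> sum -> nonlinearity" way:
    # convert the arguments to logarithms, add, then take the antilog.
    return math.exp(math.log(a) + math.log(b))

def log_and_not(log_pa, log_pb):
    # P(A AND NOT B) = P(A) * (1 - P(B)), assuming A and B independent.
    # Working entirely in the log domain, this is log P(A) + f(log P(B)),
    # where f(x) = log(1 - e^x) is the single extra nonlinearity needed.
    return log_pa + math.log(1.0 - math.exp(log_pb))

print(log_mul(6.0, 7.0))                                    # ~42.0
print(math.exp(log_and_not(math.log(0.5), math.log(0.2))))  # ~0.4
```

That is, one fixed nonlinearity applied to a summed log-domain input suffices to realize AND NOT; whether real inhibitory synapses implement exactly f(x) = log(1 - e^x) is the empirical claim at issue.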
> I see first something, some idea, well, a principle, that can bring
> people together around it.

The only thing harder than herding cats is herding different KINDS of cats - lions, tigers, housecats, cheetahs, etc. THAT is the situation here. Everyone thinks that they have THE answer, when they all need each other's help, but think the other guys are too messed up to be helpful.

> Then, teamwork will naturally follow.

Not with THESE cats.

> Then, funding will follow,

Funding requires the guys with the money to UNDERSTAND what they are funding. I suspect that this is hopeless.

> in that order. I also see a need to improve academic standards.

What standards? I didn't think they had any.

> A blog is not going to do that. I have some hopes in JAGI.

I don't.

> For now, JAGI appears to be scholarly, but it will have to overcome
> powerful established interests to stay scholarly.
>
> And I still don't believe we need 200 types of neurons to understand
> intelligence.

It all depends on where you draw the boundaries. Our "intelligence" only has a few of those 200 different types. However, there is MUCH more to our brains than just intelligence. While it might conceivably be possible to construct an intelligent computer in the relatively near term (our lifetimes), it would necessarily lack the depth of understanding of those other things that every living thing has. Further, it would probably lack motivation, feelings, etc., because these come from those OTHER parts of the brain that have many of those 200 different types of neurons.

> Even for a full implementation on some future supercomputer, the
> properties of chips will be very different from the properties of
> neurons, and the implementation will be very different.

Yes, but the MATHEMATICS will be the SAME. The rest is just "implementation details". Unfortunately, we don't yet have a handle on the mathematics.
> Now, if the goal is Neuroscience, then every detail about the brain
> matters. Fortunately, that's not what I am doing.

The problem is that without a LOT of those details, people aren't going to be doing ANYTHING useful.

> One way to help the three disciplines come together is to do simple
> things, like input/output for a piece of a retina of some animal.
> Something that everyone can understand.

Even the "simple" first few layers of the visual cortex still hide a LOT of mysteries, not the least of which is how initially random wiring quickly self-organizes into what we see.

> I have been trying to convince Alan on this, but he thinks one would
> have to model all details.

It is clearly NOT possible to accurately model all details, because SO many of them are simply too small to observe (which was Ben's and his expert's principal objection). I believe that it will be possible to infer much of what we cannot see, and to substitute things that will self-organize to fill in for what we can't infer. This means that uploading/downloading would probably involve:

1. Scanning and diagramming all possible details.
2. Mathematically filling in details that must be there for it to work.
3. Turning it on and letting it go through a period of rehabilitation as everything learns to fill in what the uploading/downloading process missed.
4. Updating the original configuration and RE-starting!!! You will have no memory of the horrors of rehabilitation, because what was learned from them was used to update the original scan, which was the basis of your restart.

> I believe EI can do it with a much simpler model, just like pixels in a
> TV camera. The goal would be to predict compression in the retina, and
> see if it agrees with the observed compression.

But there is SO much more going on there - continuing self-organization, adaptation around malfunctions, developing the functions needed for their ACTUAL environment, etc.
You might take a look at one of my web sites, http://www.FixLowBodyTemp.com, where I have been unraveling the mysteries of the hypothalamus. This is a tiny structure, yet it appears to be a separate functioning INTELLIGENCE within us that is about as bright as a PhD control systems engineer. I have learned enough to cure some conditions previously thought to be incurable, and one British web site now even credits me with saving its owner's life. It is unclear exactly how the hypothalamus does what it does, but it sure doesn't look like "logic" as we now understand it.

Steve
=======================

> From: Steve Richfield [mailto:[email protected]]
> Sent: Saturday, June 23, 2012 2:22 PM
> To: AGI
> Subject: Re: [agi] Prediction Did Not Work (except in narrow ai.)
>
> Sergio,
>
> In this and other postings, you are making the same mistake that most
> others in AGI make:
>
> You point to the many areas where interesting things are obviously
> happening that people don't yet understand, and say that THERE is where
> we should be working, not diagramming or simulating neurosystems, etc.
> It is not that you are "wrong", but rather that your view contains an
> oxymoron.
>
> What took a hundred million years of evolution and 200 different types
> of neurons to make work is NOT going to be "dreamed up" by anyone here
> or anywhere else. Maybe with another 1,000 years of talented
> mathematical work, there might be some light at the end of the tunnel.
> Obviously we don't want to wait that long, so we need to find another
> path.
>
> You are asking questions that are fundamentally mathematical in nature.
> These questions would have already been solved by talented
> mathematicians (there are lots of them), except that your observations
> are too vague to turn observations into problems, then into questions,
> and then into solutions.
>
> Any competent mathematician can answer a question.
> It takes a really good mathematician to transform a problem into a
> question.
>
> It is often beyond human capability to transform an observation into a
> problem. This has been and will continue to be the show stopper for
> AGI. Here, you either need an AGI to design an AGI, or you need more
> information than we now have. Having only my artificially enhanced
> intellect to apply, I am just pointing out the "obvious", at least
> obvious to me: that we need more information. Where would YOU look for
> more information?
>
> Once we have more information, it will take multidisciplinary
> cooperation that doesn't now exist to get over the remaining humps.
>
> More comments follow...
>
> On Sat, Jun 23, 2012 at 9:34 AM, Sergio Pissanetzky
> <[email protected]> wrote:
>
> Alan,
>
> my point didn't get through. My only contribution to science is the
> inference. I believe it is important, and I feel obligated to
> popularize it. To popularize, I need to talk applications, not just
> pure Math.
>
> Without the math you aren't going anywhere. Of course that math doesn't
> now exist. Hence, I don't expect to see AGIs anytime soon, at least not
> until the AGI community gets onto a productive pathway.
>
> In NS, that means applications to the brain. I know a bunch of things
> about the brain. They are things neuroscientists do not know. And I
> don't know many of the things they do. In my mind, this calls for
> teamwork, not hiding in a hole and shutting up.
>
> YES - it takes a combination of wet-lab science, math, and people like
> you who look at where this is all going, all working TOGETHER.
>
> Here are the things I know. I know the brain obeys laws of
> conservation, just because it is a physical system, and I know that
> laws of conservation are associated with symmetries in the physical
> system. I know the brain makes invariant representations (the chair
> upside down is still a chair).
> I would like to know if the two things are related.
>
> Good observations, and a start at formulating a problem.
>
> I see causality in the brain. Sensory organs collect causal information
> (it is still causal even if rated pulses originate from a cone).
> Muscles are driven by causal commands. Neurons firing cause other
> neurons to fire. There are exceptions, I know that too.
>
> Causal sets have symmetries, and they obey laws of conservation, which
> result in invariant representations. I have collected some experimental
> evidence suggesting that these representations are the same ones that
> the brain makes, given the same information. Should I pursue these
> matters further? Or should I just ignore the whole thing because, for
> example, "neurons sometimes fire at random"?
>
> How could a researcher distinguish "random" from "unknown function"?
> When you see statements like this, just chalk it up to their ignorance.
>
> Note that there is good evidence that we compute, especially in our
> visual systems that have been most studied, with rates of change of
> logarithms, and that logarithmic curves are discontinuous at zero.
> Hence, even the slightest system noise around zero would be seen as
> apparently random pulses (from the ~10% of neurons that actually
> produce any pulses, as most neurons are continuously analog). Now, try
> to explain even this simple concept to a neuroscientist. They are most
> likely to inquire about what you have been smoking.
>
> Or because a cone on the retina gives out many pulses instead of just
> one?
>
> Maybe that is simply what is needed to work right? Understanding WHY
> that is needed to work right is a MUCH harder problem.
>
> The clue here is to pursue the big matters without getting bogged down
> in the details.
>
> I think you are saying to look at things top-down rather than
> bottom-up, which I agree with.
> However, at some point the top and bottom must meet before you know
> enough to start coding.
>
> I am trying to say something useful about the brain that
> neuroscientists can understand, without sacrificing the big picture. I
> feel free to disregard details when I believe that the big picture is
> independent of those details. For example, if a cone produces a string
> of pulses, not just one as I proposed, would the brain then not be a
> physical system? Would it not obey conservation laws? Would it not make
> invariant representations? If I can show that a chair upside down is a
> chair with one pulse, would that necessarily be false for 3 pulses?
>
> Until you fully understand the problem that is being solved, you can't
> make ANY valid conclusions. You are now trying to think about this when
> the answers simply can't be reasoned out given the present lack of
> understanding of the PROBLEM.
>
> I know even more things that concern the brain. I know that EI is not
> an algorithm, and cannot be implemented as a circuit or network. I am
> very concerned about projects to reverse-engineer the brain and
> simulate it on a computer using a program, because they are not even
> looking at the right things. They can simulate the entire brain in
> ultimate detail, with strings of pulses coming from cones, with all the
> details of the optical nerves, and still not find EI!
>
> So, how would you ever debug such a system? No, it is necessary to
> UNDERSTAND the vast majority of what is happening to ever get a
> simulation to actually work.
>
> Because it is not there. They ought to be looking at the dynamics of
> the neurons, doing simple experiments with brain-on-a-dish or retinas
> that compress, and trying to understand how it all works, before
> embarking on blind efforts.
>
> As in prior postings, this research is all funded by the Department of
> Health, and they don't give a damn about computation - just diseases.
> In short, I agree with you and point out that the fundamental
> underlying disagreement with the "world" is very political in nature.
>
> And so also should I. Try to apply EI to simple things, understand what
> they do, find the principle, and only then, with the principle in hand,
> embark on implementation details.
>
> The question is: do neurons do EI, or not? And if they do, how do they
> do it? So how about some teamwork?
>
> I have been trying to pull this together for decades, but STILL people
> just don't "get it". Neuroscientists just don't see any value in math
> that they don't understand, computer people can't see any value in
> understanding wetware when they see their programs working entirely
> differently, and the mathematicians hardly know where to start, having
> not even been given "clean" observations, let alone problems or
> questions.
>
> This entire area is going nowhere until we get the teamwork you
> mentioned, yet each of the three areas that need to come together sees
> the other two as being completely irrelevant. Given my past efforts and
> failures, I believe that we are bumping up against a fundamental
> limitation of the human brain - the inability to see the value of other
> views of things. This occurs in nearly every area of human endeavor,
> and especially here in AGI.
>
> It seems EVER so obvious to me that the crop of people here aren't
> going to be building any AGIs, because they are literally hiding from
> the very information they need to succeed.
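[Returning to the earlier point about computing with derivatives of logarithms: a toy calculation of my own, with purely illustrative signal values, shows why the slightest noise near zero would register as apparently random activity. The derivative of log x is 1/x, which grows without bound as the signal approaches zero.]

```python
# d/dx log(x) = 1/x: the log-derivative of a signal diverges as the
# signal approaches zero, so tiny additive noise on a near-zero signal
# produces huge, apparently random swings in the encoded rate.
for x in (1.0, 0.1, 0.01, 0.001):
    print(f"signal {x:>6}: |d(log x)/dx| = {1.0 / x:>8.1f}")
```

A fixed amount of noise thus costs a thousand times more apparent "rate" at a signal level of 0.001 than at 1.0, which is the claimed source of the seemingly random pulses.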
> Steve

--
Full employment can be had with the stroke of a pen. Simply institute a six-hour workday. That will easily create enough new jobs to bring back full employment.

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
