Steve,
You are wrong on nearly all counts. You are talking about brain implementation (how the brain does it) and brain reverse-engineering (simulating all parts of the brain for medical purposes). I am not interested in either. My interest is to implement the causal theory on a computer and compare its function with the corresponding function in the brain. The brain enters only as a benchmark. My interest is in simulating function, not ion channels. My AGI is one that functions like the brain, not one that duplicates each part of which the brain is made. Of course I am curious about how the brain does it, and I listed some challenges for that, but they belong to brain implementation and will not affect function.

We know there is GI because the brain is doing it. There are no computers doing it. The key to general intelligence is removing the excess entropy, which leads to self-organization. You are correct on this one. The brain does it, but I have never heard of a computer doing it, except for mine (Google's patents are for video only). The implementation of causality has already been achieved, I am doing it, and it should scale.

Either you didn't follow my postings or you lack the necessary background. My work is new and unrelated to previous approaches; I grant it is not easy to follow. I suggest you check my website. Here is how I work: when questions arise, such as this one, I write an article and post it on my website. You must understand that I can't write the whole thing on this blog again and again. And even then, it may happen that nobody reads it. I'll be glad to point you to the right article if you have questions, or write another one if needed, but be mindful that this will take some time.

Sergio

From: Steve Richfield [mailto:[email protected]]
Sent: Friday, August 31, 2012 10:15 AM
To: AGI
Subject: Re: [agi] LM741

Sergio,

We know that there IS a solution to the GI problem, because there are "computers" already doing it, albeit with 10^11 "processors" each. You have listed many challenges, which only illustrates how much we do NOT know or can even guess right now. The whole thing is self-organizing, and we don't even know what glial cells do, which are ~90% of your brain. In our current state of extreme ignorance, guessing about the implementation of causality, etc., seems hopeless.

Note that there are probably SEVERAL independent information channels in neurons and synapses - one for each type of ion that travels around as they operate. Further, these are the equivalent of "current loop" communication in that they are VERY noise-tolerant. Who knows WHAT all these channels do?

Steve

==============

On Fri, Aug 31, 2012 at 6:53 AM, Sergio Pissanetzky <[email protected]> wrote:

Steve,

Just so we are on the same page: we are discussing an implementation issue, namely how the brain implements the theory of causality. The theory itself is bottom-up. It does not depend on implementation, and it can have many different implementations.

I see some problems with the feedback model you are considering. A system with many feedback loops is still a circuit, and a circuit cannot dissipate entropy unless it has some dissipative elements. Energy does indeed dissipate in the axons and dendrites of neurons, but there is also an influx of chemical energy that quickly replaces the lost energy. A more detailed analysis is needed to determine whether any actual dissipation of entropy takes place.

Another problem I noticed is that entropy needs to be dissipated from the information itself, that is, directly from the memory that holds the information. This must be a mechanical effect; a circuit will not do it. In my model, entropy is dissipated by having short neuronal connections, a mechanical effect. Short connections are one of the pillars for conjecturing that the brain implements the theory (but not for the theory itself). The recent neuroscience paper on the 2/3 power law for connection length (rather than the previous 4/3), which happens to be precisely the theoretical minimum for the length and is supported by considerable experimental evidence, provides critical confirmation for my model - particularly because the authors know zip about entropy or about my theory, and were not even looking into that.
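Since I cannot rewrite the whole theory here, a toy sketch may help fix ideas. The names, the example, and the simplified cost function below are invented for illustration; this is emphatically not my implementation. It only shows the flavor of the mechanism: a causally constrained sequence is reordered to shorten the distances between elements that share information, and related elements cluster into blocks on their own.

```python
# Toy illustration only (invented example, simplified "action"):
# reorder a causally constrained sequence so that operations sharing
# a variable end up close together. The total distance between uses
# of the same variable stands in for the excess entropy.

ops = [
    ("a", {"x"}), ("b", {"y"}), ("c", {"x"}),
    ("d", {"z"}), ("e", {"y"}), ("f", {"z"}),
]

# Causal constraints: the first element must precede the second.
before = {("a", "c"), ("b", "e"), ("d", "f")}

def cost(seq):
    """Sum of index distances between consecutive uses of each variable."""
    last, total = {}, 0
    for i, (_, touched) in enumerate(seq):
        for v in touched:
            if v in last:
                total += i - last[v]
            last[v] = i
    return total

def legal(seq):
    """True if the ordering respects every causal constraint."""
    pos = {name: i for i, (name, _) in enumerate(seq)}
    return all(pos[i] < pos[j] for i, j in before)

def minimize(seq):
    """Greedy local search: adjacent swaps that respect causality."""
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if legal(cand) and cost(cand) < cost(seq):
                seq, improved = cand, True
    return seq

print(cost(ops))                 # 7 before reordering
result = minimize(list(ops))
print(cost(result))              # 3 after: a,c / b,e / d,f form blocks
```

The reordered sequence groups the related pairs into adjacent blocks without being told to; that clustering is the self-organization, and the analogue of shortening these distances in the brain is precisely the short connections just discussed.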
There is still another problem. Neurons activated by an event remain activated only for a short interval of time (0.1 sec?). By the time the feedback comes back (unless the loop is really short), the neuron will be processing a different event and will have no idea what the feedback is about.

On the other hand, the causal theory does require feedback, but of a very different nature: feedback for the reuse of neural cliques. Think of a neural clique as a subroutine in a program: you call the subroutine many times, from different places in the program. Does this happen in the brain? Absolutely! fMRI confirms the presence of areas in the brain that get activated by many different processes. And this is another point of agreement between theoretical predictions and actual observation.
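To make the analogy concrete, here is a trivial sketch (made-up functions, purely illustrative): one shared routine called from two otherwise unrelated processes, the way one brain area serves many different processes.

```python
# Trivial sketch (made-up example): one shared "clique" reused by
# two unrelated processes, like a subroutine called from different
# places in a program.

def edge_detect(samples):
    """The shared clique: successive differences of a signal."""
    return [b - a for a, b in zip(samples, samples[1:])]

def visual_process(image_row):
    # One caller: an early-vision process uses the shared clique...
    return edge_detect(image_row)

def tactile_process(pressure_row):
    # ...and a completely different process reuses the same clique,
    # as fMRI shows for areas activated by many different processes.
    return edge_detect(pressure_row)

print(visual_process([0, 0, 5, 5, 0]))  # [0, 5, 0, -5]
print(tactile_process([1, 2, 4, 8]))    # [1, 2, 4]
```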
Anyway, all these different issues are arguments in favor of a research institute. There remain many thorny issues; for example, the paper assumes that the target points are given, but they depend on the information being stored. Will Hebbian learning do that?

Sergio

From: Steve Richfield [mailto:[email protected]]
Sent: Thursday, August 30, 2012 3:26 PM
To: AGI
Subject: Re: [agi] LM741

Sergio,

On Thu, Aug 30, 2012 at 11:55 AM, Sergio Pissanetzky <[email protected]> wrote:

Steve,

Alright, I won't complain anymore, we are all in the same game.

STEVE> My assertion is that it is probably IMPOSSIBLE to understand many of the aspects of intelligence (like self-organization) ...
SERGIO> I agree.
STEVE> ... without heavy math,
SERGIO> Heavy math won't help, and that is part of the problem: you can't prove anything or convince anyone by traditional math methods.

We may have a slightly different understanding of "heavy math". To my mind, it generally involves developing new techniques, representational methods, etc., whereas "light math" simply applies well-known methods to the problems at hand. Many math PhDs are only able to apply what they have already learned, so they will perpetually live in the world of "light math". Hence, it is really hard to even talk about "heavy math", because like the internals of functioning AGIs, we really don't yet know what we are talking about.

SERGIO> There is no theorem for self-organization.

... yet.

SERGIO> The theory is a conjecture, it is a theory of nature, and it is falsifiable as every theory of nature is. The only thing that really matters: does it work or not.

Yes, plus it would be interesting to know if WE work the same way. But we need to get there, to the "does it work or not" part.

SERGIO> It works very well for a few stupid experiments that I was able to carry out. Conclusion? The theory is not good because Sergio has a small computer? Obviously, any self-respecting research organization would have a petaflop supercomputer on the Internet for its members to use. I quote Boris replying to Alan's pegs & holes challenge on Aug 21, 2012: "If my algorithm does your pegs & holes because I specifically designed it to do so, then the success won't tell you anything about its ability to scale beyond that. And if it does so as a trivial side-effect of general learning, then I won't be posting here. My point is, if you need "evidence" (that is, can't evaluate an approach theoretically), then you are a crackpot, in GI terms."

There is a VAST "gray area", which springs largely from the complex and unconstrained feedback mechanisms, where theories are testable but unprovable. Further, there is plenty of evidence that brains are (and AGIs must be) finely "tuned" to work well, and tuning in complex environments has so far defied most attempts at deep analysis. We have MANY concentric feedback loops, starting with ones within neurons to constrain their operation, retrograde flow of information, and larger controlling loops. Each level would have to be well understood to even start designing the next level.

SERGIO> Algorithms don't scale because they accumulate too much entropy as they learn and grow.

Agreed.

SERGIO> Brains scale when they learn because they get rid of the entropy, and keep on learning. It's simple Physics.

Note the absence of astronomically sized brains in nature. I suspect that there are some fundamental limits that we now can scarcely imagine.

SERGIO> So where do we go from here? To the "does it work or not" part. Note that the entire field of Physics is a constant "does it work or not". Physics gets applied to new fields all the time, new experiments, spins off new technologies, and that is useful. But the minute something seems not to work ... you know, recall the recent neutrino experiments.

Yes. People need to stop attaching their egos to their proposals.

Reflecting on what is different between present AGI experimental approaches and what I foresee:

1. To be doing anything useful, each component must be manipulating some quantity that has value, significance, and dimensionality. That is basic math and physics, but AGI folks seem to ignore this ever-so-basic concept, instead betting on some unknown sort of numerology to make things work. (A toy sketch of what I mean follows below.)

2. Real-world learning is FAST, like nearly instantaneous. We now don't know how to do this. Until there is SOME good story as to how this might be done, there is no starting point for AGI.

There are several other similarly basic things we do that are now beyond our understanding of ANY way to potentially accomplish. This stuff is ever SO basic, yet the only answer I get from AGIers (Ben and I have had several exchanges about this) is that they "feel" they can somehow work past these challenges, as though this is "just" a debugging problem.

Then there are those who completely reject mathematics, and along with it #1 above, without realizing that with math goes the ability to program anything at all. These people don't realize that they have placed themselves into a perfect no-win situation.
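Here is a minimal sketch of #1 - toy code with made-up names, NOT a design for an AGI component. The point is only that a value carrying its dimension can refuse meaningless operations instead of silently becoming numerology:

```python
# Minimal sketch (made-up names, illustration only): values that carry
# their dimensions, so meaningless combinations fail loudly instead of
# quietly producing numerology.

from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dim: frozenset  # frozenset of (base dimension, exponent) pairs

    def __add__(self, other):
        # Adding quantities only makes sense when the dimensions match.
        if self.dim != other.dim:
            raise TypeError("dimension mismatch: this sum is meaningless")
        return Quantity(self.value + other.value, self.dim)

    def __mul__(self, other):
        # Multiplying combines dimensions by adding exponents.
        exps = dict(self.dim)
        for name, e in other.dim:
            exps[name] = exps.get(name, 0) + e
        return Quantity(self.value * other.value,
                        frozenset((n, e) for n, e in exps.items() if e))

potential = Quantity(0.07, frozenset({("V", 1)}))  # a membrane potential
current   = Quantity(2e-9, frozenset({("A", 1)}))  # an ionic current
power     = potential * current                    # fine: V * A -> W
# potential + current                              # raises TypeError
```

Until each component of a proposed AGI can say what its signals mean in this sense, there is nothing of substance to compute with.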
Steve

=========================

From: Steve Richfield [mailto:[email protected]]
Sent: Thursday, August 30, 2012 12:03 PM
To: AGI
Subject: Re: [agi] LM741

Sergio,

On Thu, Aug 30, 2012 at 8:04 AM, Sergio Pissanetzky <[email protected]> wrote:

SERGIO> Just look what happened to the much heralded Santa Fe Institute.

They sure did a great deal of useful research, but they did not explain self-organization - their original objective - and they sure did not explain intelligence.

My assertion is that it is probably IMPOSSIBLE to understand many of the aspects of intelligence (like self-organization) without heavy math, wet-lab experimentation, new scanning technology, and/or other out-of-discipline research. If nothing else, the last half-century has clearly shown that there are no easy answers, no "low-hanging fruit" to gather. Plenty of people just as smart as us have dashed their careers by trying to "reason things out" without the advanced tools to simply examine the solution. I have enough of a sense of history not to do the same. Whether a research center could accomplish what lots of bright people have failed to do remains to be seen. In the interest of applying the "scientific method" to arrive at an answer, we should list ALL of the alternatives, now that we know that general intelligence is NOT an easy problem. So, does anyone here see OTHER approaches to dealing with such hard problems?

One of my fears regarding a research center is that it would be ever SO easy to mismanage such a thing. Here are a few looming errors waiting to be made:

1. Proceeding with woefully inadequate funding.
2. Putting all of the funding into a small number of high-priced efforts, while starving countless small-dollar efforts that could conceivably blow the lid off of AI/AGI.
3. Throwing all the money at projects promising near-term payoffs, so that if/when they fail, the research center dies.
4. Starving areas that the management thinks won't bear near-term fruit, so there is nothing to fall back on if needed.
5. Failing to perform any sort of competent feasibility analysis before throwing big money at things.
6. Heavily funding the things that the management thinks are most important, and letting everything else starve.

There are multiple prospective goals, many ways to proceed toward each goal, and unknown pitfalls waiting to sabotage every approach. This should be professionally managed like any other BIG project. IBM pioneered the technique of putting 2 or 3 teams to designing the SAME piece of critical equipment, and selecting the "winner" at the last minute. I would expect to see some of the same sorts of management approaches at a research center.

SERGIO> In the meanwhile, I am being ignored, accused, even insulted.

Is there anyone on this forum who does NOT feel ignored, accused, and insulted?

SERGIO> I don't care. I am right, the others are wrong. And this is all that matters.

Is there anyone on this list who does NOT believe that they are right, and that is all that matters? Such is the price of greatness.

Steve

--
Full employment can be had with the stroke of a pen. Simply institute a six-hour workday. That will easily create enough new jobs to bring back full employment.
