Logan,

On Tue, Jun 19, 2012 at 11:32 PM, Logan Streondj <[email protected]> wrote:
> On Wed, Jun 20, 2012 at 2:01 AM, Steve Richfield <[email protected]> wrote:
>> If you can figure out every single detail, and if you live long enough to do so, then you will succeed. However, there are several imperfect quasi-proofs that this will be VERY difficult.
>
> a) not every detail needs to be figured out, as there may be other programmers helping.

Once you understand the problem at hand and have enough of a mathematical understanding to tackle it, I agree. However, there are vast terra incognita areas of our brains where we have NO idea what is happening. Sure, we can write simplistic software to emulate some of these functions, much as the original Eliza program emulated a human interrogator, without any of the depth of function that is necessary to function in the wild and grow our capabilities. Some of my own publications have been tackling these math problems. This requires new mathematical methods, and those don't come easily. You can't just throw a smart programmer at them and expect any useful return.

> c) proofs of difficulty are often quite silly

True, but many of them are NOT so silly. For example, look at the number of mutations needed to develop human-level reasoning. Many of those mutations amount to an epiphany that some programmer would have to have, and there have been many such mutations.

Another measure is the number of TYPES of neurons in our brains, which is ~200. There would be no reason for these to evolve if others could do the same job. We don't even have anything resembling a comprehensive list of what is being communicated between neurons. Indeed, researchers are STILL discovering new ions that travel along axons and probably participate in the learning process. Note that the present lists include far more "things" being communicated than are communicated between nodes in present software, suggesting that we are still a long way from any sort of optimality of function.
Ben and others claim that the complexities in our brains are no indicator of necessary complexities in software, but evolution DOES prefer simplicity and adaptation of existing function, just like programmers do. The only difference is that software can be contorted in ways that wetware can't. However, Ben and others don't yet realize that the reverse is also true: software can't (yet) practically emulate components that seek bidirectional equilibrium, and there is plenty of evidence that neurons are doing just that.

I can see how to build processors that CAN do this on a large scale, but they wouldn't be anything at all like current processors. Note that electric circuit simulators like SPICE do this, but the overhead is horrendous and goes as n log n. I think I see a "term trimming" approach that brings the overhead down to n, at reduced accuracy, which would be needed for AGI to work, but it would still require a complete re-architecting of approaches.

Note that just this one detail from among SO many, if correct, would be enough to stop AGI until processor architectures take a different turn, or at least open up their internal horizontal microprogramming to external modification.

Steve

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
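[Editor's sketch] Steve doesn't spell out what "term trimming" in an equilibrium-seeking network would look like, so the following is only a hypothetical illustration, not his design: each node relaxes toward the coupling-weighted mean of its neighbors (a crude stand-in for the bidirectional equilibrium that SPICE-style solvers compute), and couplings weaker than a cutoff are dropped up front, so each sweep costs time proportional to the number of KEPT terms (~n), at the price of reduced accuracy. All names and thresholds here are invented.

```python
# Hypothetical sketch of "term trimming" in an equilibrium-seeking network.
# Not Steve's actual method; an illustration of the complexity trade-off.

TRIM_THRESHOLD = 1e-3  # invented cutoff for "negligible" couplings

def trim(weights, threshold=TRIM_THRESHOLD):
    """The 'term trimming' step: drop couplings weaker than threshold,
    so later sweeps touch only the surviving (~n) terms."""
    return {node: {nb: w for nb, w in nbrs.items() if abs(w) >= threshold}
            for node, nbrs in weights.items()}

def relax(values, weights, sweeps=200, rate=0.5):
    """Gauss-Seidel-style relaxation toward mutual equilibrium.

    values:  dict node -> float state
    weights: dict node -> dict of neighbor -> coupling strength
    """
    for _ in range(sweeps):
        for node, nbrs in weights.items():
            if not nbrs:
                continue
            total = sum(nbrs.values())
            # Equilibrium target: coupling-weighted mean of neighbor states.
            target = sum(w * values[nb] for nb, w in nbrs.items()) / total
            values[node] += rate * (target - values[node])
    return values
```

With symmetric positive couplings on a connected graph, the spread between node states shrinks every sweep, so the network settles toward mutual agreement; trimming trades that fidelity for the linear per-sweep cost the passage speculates about.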
