On Sat, Apr 26, 2008 at 10:03 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:

> In my opinion you can apply Gödel's theorem to prove that 100% AGI is not
> possible in this world, if you apply it not to a hypothetical machine or
> human being but to the whole universe, which can be assumed to be a
> closed system.
>
> The axioms are the laws of physics. Then everything that happens in this
> world is an application of these axioms, and every step of this
> application is without fault if we suppose that our universe does not
> change the laws of physics.
>
> So, by Gödel, the whole universe cannot generate a set of statements
> about itself which is both complete and free of contradictions. Therefore
> a machine which is part of the universe cannot have this ability either.
I don't think this is a valid application of Gödel, because the notions of
"axioms", "theorems", "statements", etc. are being used in very loose
senses, and notions like "complete" and "consistent" make little sense here.

> But my main point was not to think about the question whether perfect
> AGI is possible.

I wouldn't even call that "perfect AGI", because it has little to do with
the AGI we have been talking about.

> I mainly wanted to point out that we ourselves have strong limits and
> are in a sense narrow AI systems instead of AGI systems.

In the sense we have been using the term, we are AGI systems, even though
we cannot solve all problems.

> The example with the visualized sound wave shows that we use very
> specialized pattern algorithms instead of general ones. And of course
> biology uses them for reasons of performance.

Agree.

> Perhaps it is possible for a human to see the patterns in a sound wave
> if he has enough time. But this would be a thousandfold slower than the
> specialized pattern recognizer for sound signals.

Agree.

> This shows that human intelligence is not built from general pattern
> algorithms in the brain, but from algorithms that are specialized for
> patterns of a specialized environment, or at least are tuned for
> specialized patterns. From this arises the question whether it makes
> sense to think about pattern algorithms that work with most patterns in
> this world. This question is mainly a question of performance.

Agree --- it is not "general" in the sense that it has nothing to do with
the human body and environment, but it is "general" in the sense that it
is not only for special types of problems in special domains.

> And my point was the assumption that we can buy general intelligence
> only at a hopelessly high cost in time and memory.

I see your point and agree, and guess that we just use the term "general"
differently. I don't use it to mean "for all possible situations", but
"for many different situations".
I fully agree that it cannot be "absolutely general" --- but that is not
the aim of current AGI research anyway. Most people here just want to be
"as general as the human mind", which is already much more "general" than
what current AI research is after.

> Another example: Imagine a child who has its first experience with pain
> when touching a hot hotplate in a kitchen. The child will learn not to
> touch the hotplate. But this task is very hard if you want to solve it
> with a general algorithm with low domain knowledge.

If you read the papers I mentioned, you'll see that we are not suggesting
"a general algorithm with low domain knowledge" like GPS at all.

> If you feel pain in your hand, what was the reason? If you think in the
> AGI way it could be:
>
> the open window in the kitchen,
> the sandwich you ate an hour ago,
> the fly on the desk,
> your blue shirt,
> ...
>
> and, after trillions of other possible reasons, the collision of the
> hand with the hotplate, which is obvious to us but not obvious to any
> algorithm without domain knowledge.
>
> Well, you can find the reason with more experience. But how many tries
> do you need with AGI? Trillions! Because with the AGI approach you can,
> by definition, not rule out anything. At the least, you have to use AGI
> learning algorithms with massive predefined rules of generalization. So
> even if we find clever AGI algorithms, their power will mainly depend on
> tuning them to work on special real-world problems.

Agree --- to many people, including me, this is exactly what AGI is after:
a baby with all kinds of potentials, not an adult that can do everything.

In summary, though I agree with many of the points you made, what I mean
by "AGI" is very different from what you mean, which is hard to avoid in
the early years of a new field.

Pei

> -----Original Message-----
> From: Pei Wang [mailto:[EMAIL PROTECTED]
> Sent: Saturday, 26.
> April 2008 14:16
> To: [email protected]
> Subject: Re: [agi] How general can be and should be AGI?
>
> From http://nars.wang.googlepages.com/wang-goertzel.AGI_06.pdf page 5:
> ---
> In the current context, when we say that the human mind or an AGI system
> is "general purpose", we do not mean that it can solve all kinds of
> problems in all kinds of domains, but that it has the potential to solve
> any problem in any domain, given proper experience. Non-AGI systems lack
> such a potential.
> ---
> That paper also addressed the issue of "general potential" vs. "domain
> knowledge".
>
> I agree with you that an intelligence "solving all kinds of problems in
> all kinds of domains" is impossible, though I don't think the
> conclusions of Gödel and Turing are the major reason (or even that
> relevant) here. My arguments are in
> http://nars.wang.googlepages.com/wang.AI_Misconceptions.pdf
>
> Pei
>
> -------------------------------------------
> agi
> Archives: http://www.listbox.com/member/archive/303/=now
> RSS Feed: http://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: http://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
