From http://nars.wang.googlepages.com/wang-goertzel.AGI_06.pdf, page 5:

---
In the current context, when we say that the human mind or an AGI system is "general purpose", we do not mean that it can solve all kinds of problems in all kinds of domains, but that it has the potential to solve any problem in any domain, given proper experience. Non-AGI systems lack such a potential.
---

That paper also addressed the issue of "general potential" vs. "domain knowledge".
I agree with you that an intelligence "solving all kinds of problems in all kinds of domains" is impossible, though I don't think the conclusions of Gödel and Turing are the major reason (or even that relevant) here. My arguments are in http://nars.wang.googlepages.com/wang.AI_Misconceptions.pdf

Pei

On Sat, Apr 26, 2008 at 6:35 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> How general should AGI be?
>
> When I first heard the term AGI, I was reminded of the General Problem Solver from 1959 (http://en.wikipedia.org/wiki/General_Problem_Solver). It solved a few simple problems but was overwhelmed by real-world problems.
>
> Second, there is Gödel's theorem, which shows that there cannot be a complex machine that can generate knowledge that is both complete and free of contradiction.
>
> There is another theorem that shows that 100% AGI is impossible: Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist (a sketch of the diagonalization argument is appended after this quoted message).
>
> On the other hand, we think that AGI is possible because we believe that we ourselves ARE AGI systems. But as these theorems show beyond any doubt: perfect AGI is impossible.
>
> Of course I am convinced that there can be systems which are far more intelligent than humans. But even these systems will have their limits.
>
> Perhaps further research into our own limits can help us construct more intelligent machines. Perhaps the mechanisms behind human intelligence are so powerful precisely because they are not designed to be a truly general problem solver.
>
> Intelligence has a lot to do with recognizing regularities in the patterns of signals obtained from the environment. Humans are clearly very good at this. But are we really good at recognizing GENERAL regularities? A simple example shows that this is far from the case:
>
> Our eyes can pick up very detailed information from the environment, so we might think that our visual sense is a general pattern recognizer. But imagine recording the sound wave of a speaking person and visualizing the wave on a screen. It would be impossible for you to recognize the words, or even the voice, of that person. I am sure that even if you trained a child for years on these patterns, it would not learn to understand the voice and the sentences; at best it would manage a slow and error-prone recognition of some words, nowhere near as powerful as our hearing.
>
> This shows that our visual sense is not able to recognize general patterns in our environment. And by the way: the child would not experience conscious phenomena such as "qualia" when analyzing the sound waves. Our "visual pattern recognizer" is NOT AGI; it is narrow AI in this sense.
>
> Assumption 1:
>
> ### Maximally powerful intelligence and maximally general intelligence are not possible at the same time. ###
>
> A system with maximally general intelligence will suffer from huge problems of complexity. So if we design an architecture that can evolve into very, very general intelligence, it will very probably need so much time and memory that it can only be of theoretical interest. So one main problem of AGI is to design it to be general, but not too general. And one of the main questions will be which features and which domain knowledge should be hard-coded, and how.
>
> If we define intelligence as the ability to solve complex problems in complex environments, we should ask what the adequate limits of complexity are.
>
> Life can only evolve in environments with very narrow conditions. I think the same holds for intelligence:
>
> Assumption 2:
>
> ### Intelligence can work and evolve only in environments with limited conditions. ###
>
> Nature is extremely complex because so many particles interact with each other. It is important for intelligence that knowledge of every single particle and of the fundamental laws of physics is not necessary to make predictions about the environment. The change of day and night is an example of a regularity that can be predicted with high accuracy from very little knowledge of the details of the environment. In a world with little structure, or with rapidly changing structure and regularities, intelligence is surely impossible, or at least very difficult.
>
> Our world is a hierarchical world with encapsulated levels: you can see regularities at high levels without knowing the details below those levels. This is certainly a key feature of our world, without which intelligent life could not have evolved at all.
>
> So I see the following interesting questions for AGI:
> What restrictions are adequate or necessary for a practical AGI system to achieve a good compromise between general and powerful intelligence?
> What are the detailed conditions of the environment that are necessary for intelligent systems?
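For reference, here is a minimal Python sketch of the diagonalization behind the halting-problem result cited in the quoted message. The 'halts' function is purely hypothetical: it is the decider assumed to exist only so that the contradiction can be derived; nothing below comes from either paper.

# Minimal sketch of Turing's diagonalization argument.
# 'halts' is hypothetical -- it is the decider assumed to exist for the
# sake of contradiction; Turing's result is that no such total function
# can be implemented.

def halts(program, data):
    """Hypothetically return True iff program(data) eventually halts."""
    raise NotImplementedError("assumed to exist only for the sake of argument")

def diagonal(program):
    # Do the opposite of what 'halts' predicts about 'program' run on itself.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    return            # predicted to loop -> halt immediately

# Now consider diagonal(diagonal):
#  - if halts(diagonal, diagonal) is True, diagonal(diagonal) loops forever;
#  - if it is False, diagonal(diagonal) halts.
# Either way 'halts' answers wrongly on this input, so no general halting
# decider can exist.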
