On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:
Pei> (2) A true AGI should have the potential to learn any natural
Pei> language (though not necessarily to the level of native
Pei> speakers).

This embodies an implicit assumption about language which is worth noting. It is possible that the nature of natural language is such that humans could not learn it if they did not have the key preprogrammed in genetically. Much data supports, and many authors would argue, that humans have a genetically preprogrammed predisposition -- what I would call a strong inductive bias -- to learn grammar of a certain type. It is likely that they would be unable to learn grammar nearly as fast as they do without it; indeed, it might be computationally intractable even were they given many lifetimes.
I agree in general. The issue is the nature of this key, and whether it is specific to grammar learning alone.
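As a toy illustration (my own sketch, not from either author's work): in learning-theory terms, the number of examples needed to pin down a hypothesis grows with the logarithm of the hypothesis class size, so a strong inductive bias -- a drastically smaller class -- can make an otherwise hopeless learning problem easy.

```python
import math

n = 20  # number of binary features in a toy "grammar" judgment

# Unbiased learner: any boolean function over n features is allowed,
# so |H| = 2^(2^n) and log2|H| = 2^n bits must be determined by data.
unbiased_bits = 2 ** n

# Biased learner: hypotheses restricted to conjunctions, where each
# feature is required-true, required-false, or ignored: |H| = 3^n.
biased_bits = n * math.log2(3)

print(unbiased_bits)  # 1048576 -- far beyond any child's data
print(biased_bits)    # ~31.7 -- learnable from a handful of examples
```

The gap between the two numbers is the point: the bias, not raw data, is doing most of the work.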
Moreover, I argue that language is built on top of a heavy inductive bias to develop a certain conceptual structure, which then renders the names of concepts highly salient so that they can be readily learned. (This explains how we can learn 10 words a day, which children routinely do.) An AGI might in principle be built on top of some other conceptual structure, and have great difficulty comprehending human words-- mapping them onto its concepts, much less learning them.
I think any AGI will need the ability (1) to use mental entities (concepts) to summarize percepts and actions, and (2) to use concepts to extend past experience to new situations (reasoning). In this sense, the categorization/learning/reasoning (thinking) mechanisms of different AGIs may be very similar to each other, while the contents of their conceptual structures are very different, due to differences in their sensors and effectors, as well as their environments.

To me, language learning isn't carried out by a separate mechanism, but by the general thinking process, since the task is the same: using certain concepts (words, phrases, sentences, ...) in place of other concepts (mental images, internalized actions, as well as their general and compound forms).

In summary, as far as the processing mechanism is concerned, any AGI should have the power to learn any language. However, without a human body and human experience, I don't think it will ever be able to use the language as a native speaker does. It will learn and comprehend the word "cat" to an extent, though never in the same way as a human being --- even human beings don't hold it in exactly the same way. Of course, for any concrete language, it is probably always possible to develop a special-purpose mechanism that handles that language better than an AGI does. As far as efficiency is concerned, I don't know how much difference that will make.
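The two abilities can be sketched in miniature (my own toy, not the machinery of any particular AGI design): a "concept" as a prototype that summarizes past percepts, which is then reused to classify a novel percept by similarity.

```python
def prototype(percepts):
    """Summarize a list of feature vectors into one concept vector (the mean)."""
    dims = len(percepts[0])
    return [sum(p[i] for p in percepts) / len(percepts) for i in range(dims)]

def nearest_concept(percept, concepts):
    """Extend past experience to a new situation via similarity."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(concepts, key=lambda name: dist2(percept, concepts[name]))

# Hypothetical percepts along two features, e.g. (furriness, metallicness)
concepts = {
    "cat": prototype([[0.9, 0.1], [0.8, 0.2]]),
    "car": prototype([[0.1, 0.9], [0.2, 0.8]]),
}
print(nearest_concept([0.85, 0.15], concepts))  # -> cat
```

Two agents running this same mechanism over different sensors would form different prototypes -- same process, different conceptual contents.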
Moreover, it is worth noting the possibility that the amount of computation that might in principle be necessary for learning a "natural language" can't be bounded as one might think. Historically, natural language was a creation of evolution (or of evolution plus human ingenuity, but since humans were a creation of evolution, and in my view evolution may often work by creating mechanisms that then lead to "or make" other discoveries, we can just consider this for some purposes as a creation of evolution). Thus, you might posit that the amount of computation necessary for learning a natural language is bounded by the (truly vast) amount of computation that evolution could have devoted to the problem. *But this does not follow*. Evolution did not "learn" natural language; it created it. To the extent that language is an encryption system, evolution thus *chose* the encryption key, it did not have to decrypt it.
Well, I'd rather not take language as an encryption system, in the sense that each word and sentence has a "true meaning", independent of the language, and that to learn the language is to build a mapping between words and their denotations. This kind of semantics, to me, is the root of many problems in language learning.
Thus in principle at least, learning a natural language without being given the key could be a very hard problem indeed, not something that even evolution would have been capable of.
Again, I fully agree that there is a "key", but I don't think of it as an encryption key.
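The choose-vs-decrypt asymmetry can be made concrete with a toy XOR cipher (my own sketch, not Baum's example): generating a key costs time linear in its length, while recovering it by search costs time exponential in its length -- and evolution only had to do the first.

```python
import secrets

def choose_key(nbits):
    # cheap: O(nbits) -- this is all the "creator" of the code has to do
    return secrets.randbits(nbits)

def recover_key(ciphertext, plaintext, nbits):
    # brute force: O(2**nbits) trial decryptions -- the learner's burden
    for k in range(2 ** nbits):
        if ciphertext ^ k == plaintext:
            return k

plain = 0b1011
key = choose_key(12)
cipher = plain ^ key
assert recover_key(cipher, plain, 12) == key
```

At 12 bits the search is instant; every added key bit doubles the learner's work while barely changing the creator's, which is why the creator's resources put no useful bound on the learner's.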
This is discussed in more detail in What is Thought?, ch 12 I believe.
I agree with many points you made about how the human mind gets its abilities. However, I'm still not convinced that an AGI must take the same path. To me, an AGI only needs to be similar to the human mind in certain (though important) aspects, rather than in all aspects; therefore, the way the human mind got here is not necessarily the most efficient way for an AGI to be designed.

Pei Wang
Eric Baum

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
