If you can build a system that understands human language, you are still far from AGI. Being able to understand someone else's language in no way implies having the same intelligence. I think there were many people who understood Einstein's language, but they were not able to create what he created.
Therefore it is better to build a system that is able to create things rather than one that is only able to understand things. Language understanding is easy for nearly every small child, but mathematics is hard for most people, and for computers today. If you say mathematics is too narrow, then this implies either that the world cannot be modeled by mathematics or that the world itself is too narrow.

- Matthias

>>> Andi wrote:

Matthias wrote:
> There is no big depth in the language. There is only depth in the
> information (i.e. patterns) which is transferred using the language.

This is a claim with which I obviously disagree. I imagine linguists would have trouble with it as well. He goes on to conclude:

> Therefore I think, the ways towards AGI mainly by studying language
> understanding will be very long and possibly always go in a dead end.

That seems similar to my point, too, and it is really what I see as the definition of AI-complete: if you had something that could understand language, it would have to be able to do everything that a full intelligence would do.

There seems to be a claim here that one could have something that understands language without anything else underneath it, or that language could be separated from some real intelligence lying underneath, so that studying just language would be limiting. That is a possibility. There are certainly specific "language modules" that people have to assist them with their use of language, but it does seem that intelligence is more integrated with language than that.

Somebody suggested that it sounds like Matthias has some kind of mentalese hidden down in there: that spoken and written language is uninteresting because it is just a rearrangement of whatever internal representation system we have. That is a fairly bold claim, and it has logical problems, like a homunculus.
It is natural for a computer person to think that mental things can be modifiable and transmittable strings, but it is hard to see how that would work in people.

I also get the sense that Matthias thinks there might be some small general domain where we can find a shortcut to AGI. No way. Natural language will be a long, hard road, and any path to a general intelligence will be a long, hard road, I would guess. It still happens regularly that people say they're cooking up the special sauce, but I have seen that way too many times. Maybe I'm being too negative.

Ben is trying to push this list toward more positive discussion of successful areas of development, and it certainly would be nice to have some domains where we can explore general mechanisms. The problem I see with mathematics alone as a domain is that the material could take on too narrow a focus. If we want generality in intelligence, I think it helps if a bit of knowledge or skill from one domain can be tried in a different area, and my claim is that general language use is one of the few areas where that happens.

andi
