There is no great depth in language itself. The depth lies only in the information (i.e. the patterns) which is transferred using the language.
Human language seems magical because at first view it is so ambiguous, and precisely these ambiguities show that my model of transferred patterns is right. An example: "Yesterday I saw a big dog next to a tree. It seemed to be very angry." Who is "it", the tree or the dog? I think most people would answer: the dog. The reason we can resolve this ambiguity is that people have similar patterns for trees, for dogs, and for anger.

So if you assume the model of transferred patterns, you see that no great intelligence is necessary to understand language. The intelligence lies only in the brain's representations of the dog, the tree, and the emotion of anger, and these representations hardly depend on language.

Only with a real-time brain scanner does the process of producing language become really interesting, because if you say "I feel happy", there must be a flow of information from your internal representation of happiness to the spoken sentence. Hopefully the day will come when we can backtrack this process in order to understand how patterns in the brain are implemented. But black-box language understanding by itself will show you only a few hints about internal representations, just as understanding the XML strings coming from computers shows only a few hints about their databases and algorithms. Therefore I think the road toward AGI mainly via studying language understanding will be very long and may well end in a dead end.

>>> Andi [EMAIL PROTECTED] wrote:
> And what's up with language as just a communication protocol? I'm right now
> going through the Teaching Company class on linguistics, and I'm kind of
> surprised by the many interacting layers and depth in language, and how
> there is intricate constraint satisfaction going on between all the levels.
> It's not a simple thing in any way, and it sure looks AI-complete to me. I
> mean, it could just come down to intelligence really just being a system of
> communication between modules.
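The pattern-based disambiguation in the dog/tree example above can be sketched in a few lines of code. This is only an illustrative sketch, not a claim about how the brain does it: the lexicon, the feature names, and the `resolve_pronoun` helper are all invented for the example. The point is that once each candidate noun carries stored "patterns" (here, crude semantic features), resolving the pronoun needs almost no machinery.

```python
# Illustrative sketch (invented for this example): resolve "It seemed to be
# very angry" by matching what the predicate requires against the stored
# features ("patterns") of each candidate antecedent.
FEATURES = {
    "dog":  {"animate", "can_feel_emotion"},
    "tree": {"inanimate"},
}

# What the predicate requires of whatever "it" refers to.
PREDICATE_REQUIREMENTS = {
    "angry": {"can_feel_emotion"},
}

def resolve_pronoun(candidates, predicate):
    """Return the candidate whose features satisfy the predicate, if unique."""
    required = PREDICATE_REQUIREMENTS[predicate]
    matches = [c for c in candidates if required <= FEATURES[c]]
    return matches[0] if len(matches) == 1 else None

print(resolve_pronoun(["dog", "tree"], "angry"))  # -> dog
```

The "intelligence" here sits entirely in the feature table, not in the matching step, which is exactly the claim above: the hard part is the internal representation, not the language processing.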
andi

Ben wrote:
> I am well aware that building even *virtual* embodiment (in simulated
> worlds) is hard....
>
> However, creating human-level AGI is **so** hard that doing other hard
> things in order to make the AGI task a bit easier seems to make sense!!
>
> One of the things the OpenCog framework hopes to offer AGI developers is a
> relatively easy way to hook their proto-AGI systems up to virtual bodies,
> saving them the software integration work...
>
> Integration of robot simulators with virtual worlds, as I've been
> advocating, would make this sort of approach even more powerful...
>
> -- Ben G
>
> On Sat, Oct 18, 2008 at 3:45 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
>
>> I think embodied linguistic experience could be **useful** for an AGI to
>> do mathematics. The reason for this is that creativity comes from the use
>> of huge knowledge and experience in different domains.
>>
>> But on the other hand, I don't think embodied experience is necessary. It
>> could even have some disadvantages. For example, we can think in 3d spaces
>> much better than in spaces of dimension n. But for science today,
>> 3d mathematics is less needed than the mathematics of n-dimensional spaces.
>>
>> An AGI which gets nothing but pure mathematical experience in arbitrary
>> mathematical spaces, which we give the AGI through our mathematical
>> definitions, could even have an important advantage over an AGI which is
>> full of 3d patterns because of its 3d embodied experience.
>>
>> I suppose the 3D vs. nD subject is just one of many examples one could
>> find. But the main argument against embodied linguistic AGI for a
>> first-generation AGI is the amount of work necessary to build it. I do not
>> think that the ratio of utility to cost is positive.
>>
>> - Matthias
>>
>>> Ben Goertzel wrote:
>>> That is not clear -- no human has learned math that way.
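The 3D-vs-nD point in the quoted message can be made concrete with a small experiment. The sketch below (my addition, not from the thread; function names are invented) shows one well-known way high-dimensional spaces defy 3D intuition: pairwise distances between random points concentrate, so the nearest and farthest neighbours become almost equally far away.

```python
# Illustrative sketch: distance concentration in high dimensions.
# In dim=3 the min/max pairwise distance ratio among random points is small;
# in dim=1000 it approaches 1, i.e. "near" and "far" nearly coincide.
import math
import random

random.seed(0)

def pairwise_distance_spread(dim, n_points=50):
    """Return (min, max) Euclidean distance among random points in [0,1]^dim."""
    pts = [[random.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [
        math.dist(p, q)
        for i, p in enumerate(pts)
        for q in pts[i + 1:]
    ]
    return min(dists), max(dists)

for dim in (3, 1000):
    lo, hi = pairwise_distance_spread(dim)
    print(f"dim={dim:4d}  min/max distance ratio = {lo / hi:.2f}")
```

An AGI trained only on 3D embodied experience would carry intuitions (distinct near/far neighbours, meaningful clusters) that simply fail in such spaces, which is one reading of the quoted argument.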
>>> We learn math via a combination of math, human language, and physical
>>> metaphors...
>>>
>>> And the specific region of math-space that humans have explored is
>>> strongly biased toward those kinds of math that can be understood via
>>> analogy to physical and linguistic experience.
>>>
>>> I suggest that the best way for humans to teach an AGI math is via first
>>> giving that AGI embodied, linguistic experience ;-)
>>>
>>> See Lakoff and Nunez, "Where Mathematics Comes From", for related
>>> arguments.
>>>
>>> -- Ben G
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "Nothing will ever be attempted if all possible objections must be first
> overcome" - Dr Samuel Johnson

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com
