Mike,

No one I have talked to thinks that AGI could work without some kind of Input-Output modality. So even though someone might think that the processes of mathematics (or of computational linguistics, in Lakoff's case) can effectively model the mind's ability to think (or to hold and process effective methods of communication and decisions about actions), they are not denying that genuine learning (of some sort) must take place through some kind of Input-Output modality. So a program might use both mathematics and visual analysis without any conflict.
Just because a computer program is "programmed" ahead of time does not mean that the program cannot -ever- act as if it were being modified by the experiences it dealt with through its Input-Output modalities. So even if I were to program an AGI system so that it always tried to get the user to continue to interact with it, it could still learn to override this basic rule if I designed it to be an AGI program. It might develop a strategy where it learned to delay a conversation in order to make it more likely that it could continue the conversation at a later time. So from casual observation it might seem as if it was -not- trying to continue a conversation even when it was.

This should make sense to you, and I hope we can move past any echoes of the idea that a "programmed" computer can never learn anything new because it can only follow its instructions. A program can be designed explicitly so that it can try new things. The problem is that a program that tried to do this intelligently would become very complicated very quickly. As it acquired new information and tried to develop new insights, it would effectively make its programming more complicated than it was originally.

A computer that reacts to input (in a certain way) will act as if its programming were being constantly modified. So just as I could write a program which would allow me to effectively program it as I played a game with it, I could also write a program that would allow anyone who played the game to "program" it this way as well. As you might imagine, trying to anticipate all the ways that a program like this could be modified is too much of a challenge for any person. So the challenge is super complicated. The only way that game programmers can deal with this is by limiting the ways that the AI might "learn" new things.
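To make the point concrete, here is a minimal sketch of what I mean by a fixed program acting as if its programming were modified by experience. Everything here (the class, its weights, the reward numbers) is purely illustrative, not taken from any real AGI system:

```python
class AdaptiveAgent:
    """Toy agent whose fixed code produces changing behavior.

    No line of code is ever rewritten; only the stored weights
    change, yet an outside observer would see the agent abandon
    its original 'always continue' rule.
    """

    def __init__(self):
        # The 'programmed' starting rule: always continue the conversation.
        self.weights = {"continue": 1.0, "delay": 0.0}

    def choose(self):
        # Pick whichever strategy currently has the highest weight.
        return max(self.weights, key=self.weights.get)

    def learn(self, strategy, reward):
        # Experience shifts the weights, so future choices differ.
        self.weights[strategy] += reward


agent = AdaptiveAgent()
first = agent.choose()            # follows the initial rule: "continue"

# Suppose experience shows that delaying leads to longer engagement later.
agent.learn("delay", 2.0)
later = agent.choose()            # now overrides the rule: "delay"
```

The program only ever follows its instructions, yet after the `learn` call it behaves as if it had been reprogrammed, which is exactly the distinction I am drawing above.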
> On Mon, Jul 23, 2012 at 7:13 PM, Mike Tintner <[email protected]> wrote:
>
> Ben,
>
> Did you read it in the proper order, so to speak (hard to do from the
> layout)? i.e. starting with *my* post and his reply?
>
> I don’t think there’s any doubt that he is replying to, and confirming, my
> position – wh. is a general point about how the brain works, and how image
> schemas inform and control many different kinds of action, incl. cognition
> and representation.
>
> It’s true that at almost every point, Lakoff and his many
> followers/colleagues seek to find computational instantiations of their
> ideas.
>
> My impression is that these attempts are always misguided – and invite the
> kind of response you have made – for they do IMO “betray” or certainly
> distort the guiding image-schema inspiration – and the idea of mapping
> schemas onto each other. (I’d like to discuss this with him/them – and may
> use your reply as an opportunity.)
>
> But I don’t think there can be any doubt that Lakoff & co do see image
> schemas as central, as I have outlined (and don’t see them as mathematical)
> – and that while they may seek to be computational, their primary loyalty
> is to the biological and to science.
> *From:* Ben Goertzel <[email protected]>
> *Sent:* Monday, July 23, 2012 11:54 PM
> *To:* AGI <[email protected]>
> *Cc:* AGI <[email protected]>
> *Subject:* Re: [agi] Image schemas control all forms of action [Lakoff replies]
>
> Mike,
>
> Lakoff's reply to you is not about "image schema" but rather about
> "process schema", specifically Narayanan's x-schema.
>
> Narayanan's x-schemas are "a graph-based, token-passing formalism based
> on stochastic Petri nets":
>
> http://www1.icsi.berkeley.edu/~snarayan/CFN-NCPW04.pdf
>
> These x-schemas are an abstract mathematical formalism, and not
> intrinsically "imagistic".
>
> Narayanan uses x-schemas as a bridge btw language, action, perception and
> reasoning -- much as OpenCog uses its AtomSpace model in this role.
>
> Ben G
>
> --
> Ben Goertzel
> http://goertzel.org
> ### Sent from my mobile; plz forgive any typos or excessive concision ...
>
> On 24 Jul, 2012, at 5:17 AM, "Mike Tintner" <[email protected]> wrote:
>
> *From:* George Lakoff <[email protected]>
> *Sent:* Monday, July 23, 2012 10:11 PM
> *To:* Mike Tintner <[email protected]>
> *Subject:* Re: [Cogling-L] The scope of image schemas
>
> Narayanan's X-schemas (or process schemas) characterize all events and
> actions and actually control physical actions. So you're right about that.
> We are now working on entity schemas, but we're not there yet.
>
> George
>
> On Sun, Jul 22, 2012 at 11:34 AM, Mike Tintner <[email protected]> wrote:
>
>> Lakoff: The idea behind image metaphors is simple. Images are structured
>> by image schemas. A given image has multiple image schemas linked via
>> neural binding to form a composite image schema – or more than one.
>> Metaphors map one image to another by mapping the source image schemas
>> to the identical image schemas in the target.
>>
>> George,
>>
>> Your exposition was v. useful. Can you/should you not extend the scope of
>> image schemas?
>> They structure, presumably, under
>>
>> *Images*: both
>>
>> *Verbal Images* &
>> *Graphic/Photographic/Sensory Images*
>>
>> and not just word images but:
>>
>> *Words/Language/Concepts* – period; *all words* are structured by image
>> schemas, no?
>>
>> And from that can one go on to argue – no? – that they structure
>>
>> *Moves/Movement* – period – that, for example, our reaching for a cup is
>> structured by a schema.
>>
>> After all, language is used principally to structure actions: "Hand me
>> that cup" – "Go to the other room". It makes sense that image schemas
>> should structure not just verbally-mediated action, but all action, however
>> mediated. The same mirror neurons that respond to (image-schema-structured)
>> verbal accounts of action also respond when just watching direct sensory
>> images of agents executing those actions.
>>
>> Concepts/schemas arguably structure all the actions of living creatures.
>>
>> Comments?
>>
>> P.S. Personally, I think it's helpful to think of image schemas as
>> "[loose] outlines" – esp. in connection with actions. Comments?
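Since Ben's quoted reply characterizes x-schemas as "a graph-based, token-passing formalism based on stochastic Petri nets", it may help to sketch what token passing in a Petri net looks like. This is only a bare deterministic illustration of the general formalism; it is not Narayanan's actual model, and the event names are made up:

```python
class PetriNet:
    """Minimal token-passing Petri net.

    places: dict mapping place name -> token count
    transitions: dict mapping transition name -> (input places, output places)
    """

    def __init__(self, places, transitions):
        self.places = dict(places)
        self.transitions = transitions

    def enabled(self, name):
        # A transition can fire only if every input place holds a token.
        inputs, _ = self.transitions[name]
        return all(self.places[p] > 0 for p in inputs)

    def fire(self, name):
        # Firing consumes one token per input place and
        # produces one token per output place.
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.places[p] -= 1
        for p in outputs:
            self.places[p] += 1


# An event modeled in x-schema style as ready -> ongoing -> done.
net = PetriNet(
    places={"ready": 1, "ongoing": 0, "done": 0},
    transitions={
        "start": (["ready"], ["ongoing"]),
        "finish": (["ongoing"], ["done"]),
    },
)
net.fire("start")
net.fire("finish")
```

The token's path through the net is what makes the formalism "graph-based" rather than imagistic: the structure of an event is captured by which transitions can fire in which order, not by any picture-like representation.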
