I'm still not understanding what you mean by multiversal computing - except that whatever it is, it's going to be way too complicated - and it misses the point.
What I should have also said is this: interpreting language/concepts is fundamentally CREATIVE (and not just a matter of handling GENERAL concepts) - and I know of no approach to language that isn't the *opposite*: trying to pin down the "right" meaning of language rationally/formulaically. To put that another way: language is INTERPRETIVE - the opposite of every AI approach to language, which all see language as something to be DECODED. [The issue, BTW, isn't: what is GO TO THE KITCHEN *not* about? but rather: what *is* it about? How do you GO to wherever?]

So when you are faced with a verbal instruction, it is NOT like doing a maths calculation, or any form of decoding like anagrams or NLP. You have to create an interpretation of the concepts - put together an interpretation (or possibly multiple interpretations) on an ad hoc basis - in this case, working out a route to the kitchen, which will typically be a *new and different* route, not a formulaic variation of an old one. You have to put steps and paths together as you go along. It is simply impossible to have a preplanned, formulaic approach to GOING TO THE KITCHEN - or to *any other verbal instruction* - that will automatically work out the one best, perfect route to the kitchen. Your interpretation will not be "right" or "wrong" but "good" or "bad", depending on what and whose criteria you choose to apply. And you can change it later and try different routes.

It doesn't matter whether we're talking about enacting language or merely reading it. AI's approach to reading is totally divorced from real language users. There isn't one "right" interpretation of, say, Hamlet or Avatar or The Great Gatsby or what-Obama-meant-about-NSA - there are probably a million interpretations of Hamlet, hundreds of thousands of Gatsby, and thousands of Avatar. And in the arts and letters, people don't seek "right" interpretations; they delight in producing - and the main point is to produce - ever NEW interpretations.
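To make the "good vs. bad, not right vs. wrong" point concrete: even in a toy floor plan there is no single correct route to the kitchen, only routes that score better or worse under whichever criterion you happen to apply. A minimal sketch - the routes and the two criteria below are entirely invented for illustration:

```python
# Two ways to "GO TO THE KITCHEN" across a toy floor plan, expressed as
# move sequences. Both routes are invented for illustration.
route_a = ["N", "E", "N", "E", "N", "E"]         # shorter, but zigzagging
route_b = ["N", "N", "N", "E", "E", "E", "E"]    # longer, but straighter

def length(route):
    """Criterion 1: fewest steps."""
    return len(route)

def turns(route):
    """Criterion 2: fewest direction changes."""
    return sum(1 for prev, cur in zip(route, route[1:]) if prev != cur)

# Neither route is "right" or "wrong"; each is "good" or "bad" only
# relative to whichever criterion we choose to apply.
by_length = min([route_a, route_b], key=length)  # route_a wins
by_turns = min([route_a, route_b], key=turns)    # route_b wins
```

Swap in a different criterion (energy, scenery, avoiding the dog) and a different route wins; nothing in the instruction itself fixes that choice.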
In maths and AI there is always a right answer (unless you're creating new maths). That's because you're always producing OLD solutions to problems. You don't get creative about 22 + 22 = ? It's always supposed to be the same old 44. But in language you're always supposed to be producing a new interpretation, a new instantiation. You seem to be bemoaning that we - you and I and others - have different interpretations of verbal instructions and statements, when the wonder of language is that it enables this - it enables endless creativity.

On 12 August 2013 17:41, Steve Richfield <steve.richfi...@gmail.com> wrote:

> Jim,
>
> I can't tell whether you are saying something a little different than you
> mean, or I failed to communicate. Hence, at some risk of addressing
> something other than what you are thinking, I will attempt to clarify...
>
> Past AI approaches have attempted to identify which of several potential
> meanings was intended. However, even identifying the potential meanings is
> fraught with problems, because language is granular, so in general it is
> NOT possible to precisely state any particular meaning unless it just
> happens to coincide with what is sayable with a particular vocabulary.
> This is actually quite rare, and even when it does occur, the meanings in
> your lexicon are doubtless different from the meanings in my lexicon, so
> by the time it reaches my brain, the meaning has permuted.
>
> I am suggesting looking at this "from the other side": identifying what is
> clearly NOT being said, which is insensitive to granularity and such
> things. To use one of your examples, "Go to the kitchen." does NOT say to
> go to the bathroom, or the bedroom, or the hallway, or the basement, or
> the attic.
> There might be several kitchens on different floors, or a kitchenette set
> up in the corner of a large bathroom, or a restaurant named "The Kitchen"
> next to my home, or a room that I have never explored that might contain
> a kitchen, or other places that I can't even now imagine, **ALL** of which
> are included in this statement, simply because they were not excluded.
> Looking at the context of other statements - which also have many
> potential meanings described by what they are NOT saying - will probably
> (but not definitely) resolve which "kitchen" is the intended one,
> **WITHOUT** having to form an accurate world model of the universe, as
> past AGI approaches have absurdly proposed.
>
> There is no reason this couldn't be programmed, though no one has yet done
> the design. I see no reason why multiversal "thinking" is beyond
> programming, other than present-day computers probably being too puny to
> do such things.
>
> The implied question in my posting is "Does this make sense?" as a way to
> view language. Can you think of any statements where this sort of
> exclusive POV would lead analysis astray?
>
> Steve
>
> ===================
>
> On Mon, Aug 12, 2013 at 9:07 AM, tintner michael <tint...@blueyonder.co.uk> wrote:
>
>> You seem to be trying to break out of narrow AI but not quite making it
>> (your post is a bit confusing).
>>
>> The essence of language/a-conceptual-system is that it is GENERAL, not
>> SPECIFIC. Everything in narrow AI, everything in computing so far, is
>> SPECIFIC.
>>
>> Every concept is general, open-ended.
>>
>> Any conceptual instruction -
>>
>> GO TO THE KITCHEN
>> HAND ME THAT CUP
>> READ THAT BOOK
>>
>> - is infinitely open-ended. There is no specific route (or set of
>> specific routes) to the kitchen for the speaker/thinker to GO along -
>> either in this particular house, or in the infinite diversity of domestic
>> settings that the concepts can refer to.
>> The whole point of a conceptual system is that it gives you infinite
>> freedom and flexibility - albeit within constraints. You can GO along any
>> of many routes, but they must have some relation to the/a kitchen.
>>
>> Ditto, you can follow a general, infinitely diverse range of routes in
>> HAND-ing or in READ-ing a book.
>>
>> That's the whole damn point of a conceptual system. AGI-ers don't get it,
>> any more than they understand that AGI is about creativity - doing
>> something NEW and not something old - which is what all narrow AI progs
>> do.
>>
>> A conceptual system - being general - is the essential medium of
>> creativity. If you can GO along an infinite diversity of routes, then you
>> have licence to find new ones.
>>
>> All this comes down in the end to common sense. Take any set of
>> words/concepts in the language and try to interpret them specifically -
>> constrain them so that there is, say, only one set of ways to GO to the
>> kitchen, or the toilet, or the movies. Ain't possible. Ain't meant to be.
>> Jim, our friendly database dinosaur, will keep trying. But it will never
>> work.
>>
>> On 12 August 2013 16:45, Steve Richfield <steve.richfi...@gmail.com> wrote:
>>
>>> Supposing for a moment that we think in the multiverse, and that
>>> language carries multiversal meanings between disparate world models,
>>> the following train of logic says that statistical disambiguation is a
>>> really bad idea, except maybe in some odd applications like low-quality
>>> translation w/o footnotes...
>>>
>>> Everyone has been presuming that language states specific meanings, but
>>> let's turn this around and presume that a given statement simultaneously
>>> means EVERYTHING that it could conceivably mean, complete with the usual
>>> Bayesian weightings.
>>> Other surrounding statements would also simultaneously mean everything
>>> that they could conceivably mean, but most of those meanings would be
>>> incongruous with the meanings of surrounding statements, so that only a
>>> few, and maybe/hopefully only one, meaning for an entire passage would
>>> emerge. Where several/many meanings survive, live with it, because that
>>> is the way of the multiverse.
>>>
>>> If only one of the meanings proves helpful in guiding subsequent action,
>>> and disambiguation were perfect, then this would work the same as a
>>> statistical disambiguation approach. In the vast majority of cases,
>>> parallel erroneous interpretations would be no problem, because their
>>> erroneous qualifiers would specify non-existent leaves on the
>>> world-model tree, and so would affect nothing.
>>>
>>> HOWEVER, where multiple applicable meanings survive overall analysis,
>>> the presence of competing meanings would greatly reduce the reliability
>>> of any particular meaning, which is EXACTLY what is needed and what
>>> statistical disambiguation does NOT do.
>>>
>>> As evidence of this process, consider the quote "Try not. Do, or do not.
>>> There is no 'try'." from *The Empire Strikes Back*. This refers to all
>>> of a person's endeavors. Without multiversal meanings, this would be
>>> quite difficult to explain. With multiversal meaning, the intended
>>> meaning is quite obvious without further explanation.
>>>
>>> So, a blank page would simultaneously be saying everything that could
>>> conceivably be said - a sort of selection of your entire world model.
>>> Then you put some words on the page, and in the process you zero in on a
>>> particular branch in your world-model tree. You put more words on the
>>> page, and in the process select particular leaves in your world model.
>>>
>>> Your world model is probably as different from mine as two trees can be
>>> from one another.
>>> So, of course a given set of words doesn't lead to the "same" branch in
>>> our two world models, because there is no "same branch". This is no
>>> problem, because your words will find ALL of the branches that "fit",
>>> and **NOT** just a particular branch that seems to be the "best fit".
>>>
>>> This suggests an ENTIRELY different approach to programming to
>>> "understand" natural language.
>>>
>>> Any thoughts?
>>>
>>> Steve
>
> --
> Full employment can be had with the stroke of a pen. Simply institute a
> six-hour workday. That will easily create enough new jobs to bring back
> full employment.
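For what it's worth, Steve's exclusion-based reading can be sketched in a few lines. The idea, as described in the thread: a statement initially "means" every candidate it does not exclude; surrounding statements intersect the survivors; and where several meanings survive, you keep them all rather than forcing a single statistically "best" one. The room names and filters below are invented purely for illustration:

```python
# Every place the house's world model knows about (invented example).
world_model = {"kitchen-1st-floor", "kitchen-2nd-floor", "kitchenette",
               "bathroom", "bedroom", "hallway", "basement", "attic"}

def surviving_meanings(is_not_excluded, candidates):
    """A statement 'means' every candidate it does not exclude."""
    return {c for c in candidates if is_not_excluded(c)}

# "Go to the kitchen." excludes only the places that are clearly NOT
# kitchens; everything else remains a live meaning.
after_first = surviving_meanings(lambda room: "kitchen" in room,
                                 world_model)

# A surrounding statement ("...and stay on this floor", say) excludes
# more candidates from the survivors of the first statement.
after_second = surviving_meanings(lambda room: "2nd" not in room,
                                  after_first)

# Two meanings survive. Per the proposal, we live with both rather than
# picking a single statistically "best" reading.
```

Whether this scales past toy candidate sets - that is, whether "everything not excluded" can be enumerated at all against a real world model - is exactly the open design question in the thread.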