Mike,

More important than multiplicity is the issue of discrete-point semantics vs.
continuous real-world possibilities. Multiplicity could potentially be
addressed by requiring users to put (clarifications) after unclear words
(e.g., in response to diagnostic messages asking them to clarify their
input). Dr. Eliza already does some of this: when it encounters "If ...
then ..." it complains that it just wants to know the facts, and NOT how you
think the world works. However, such approaches cannot address the discrete
vs. continuous issue, because every clarifying word has its own fuzziness,
you never know what the user's world model (and hence its discrete points)
is, and so on.
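The diagnostic pass described above could be sketched roughly as follows. This is a hypothetical illustration only; the pattern and the message text are invented for the demo and are not Dr. Eliza's actual code.

```python
import re
from typing import Optional

# Flag rule-like input ("If ... then ...") and ask for plain facts instead.
RULE_PATTERN = re.compile(r"\bif\b.+\bthen\b", re.IGNORECASE)

def diagnose(sentence: str) -> Optional[str]:
    """Return a clarification request if the input looks like a rule,
    or None if it reads as a plain statement of fact."""
    if RULE_PATTERN.search(sentence):
        return ("Please state the facts themselves, not how you think "
                "the world works.")
    return None

print(diagnose("If the pump fails then the tank overflows."))  # complaint
print(diagnose("The tank overflowed at 3 PM."))                # None
```

Of course, as argued above, this only catches the form of the input; it does nothing about the fuzziness of the clarifying words themselves.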

Being somewhat of an Islamic scholar (a skill needed to escape after being
sold into servitude in 1994), I am sometimes asked to clarify really
simple-sounding concepts like "agent of Satan". The problem is that many
people from our culture simply have no place in their mental filing system
for this information, without which it is simply not possible to understand
things like the present Middle East situation. Here, the discrete points
addressable by their world-model are VERY far apart.

For those of you who do understand "agent of Satan", this very mental
incapacity is what MAKES such people agents of Satan. This is related to a
passage in the Qur'an stating that most of the evil done in the world is
done by people who think they are doing good. Sounds like George Bush,
doesn't it? In short, not only the definition but also the reality is
circular. This is one of those rare cases where a common shortcoming in
world models actually has a common expression referring to it. Too bad that
these expressions come from other cultures, as we could sure use a few of
them.

Anyway, I would dismiss the "multiplicity" viewpoint, not because it is
wrong, but because it guides people into disambiguation, which is ultimately
unworkable. Once you understand that the world is a continuous domain but
language is NOT, you will realize the hopelessness of such efforts: every
question and every answer is in ERROR, unless by some wild stroke of luck it
is possible to say EXACTLY what is meant.
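The inherent error can be made concrete with a toy quantization demo. The "world" below is a continuous value and the "language" has only a few discrete words for it, so every utterance carries error; the vocabulary and its meaning points are invented for illustration.

```python
# Word -> the discrete "meaning point" it names (made up for this demo).
vocabulary = {"cold": 5.0, "warm": 20.0, "hot": 35.0}

def nearest_word(value: float) -> str:
    """Pick the word whose meaning point is closest to the true value."""
    return min(vocabulary, key=lambda w: abs(vocabulary[w] - value))

true_temperature = 27.3              # what the speaker actually means
word = nearest_word(true_temperature)
error = abs(vocabulary[word] - true_temperature)
print(word, error)                   # "warm", off by about 7.3 degrees
```

The speaker can only say "warm"; the 7.3-degree gap between the word's meaning point and the intended value is exactly the unavoidable ERROR described above.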

As an interesting aside, Bayesian programs tend (89%) to state their
confidence, which overcomes some (13%) of these problems.
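A minimal sketch of that aside: a Bayesian answerer reports its confidence alongside the answer instead of pretending to certainty. The priors and likelihoods below are made up purely for illustration.

```python
def posterior(prior: float, likelihood_if_true: float,
              likelihood_if_false: float) -> float:
    """Bayes' rule: P(H|E) from P(H), P(E|H), and P(E|not H)."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

p = posterior(prior=0.5, likelihood_if_true=0.9, likelihood_if_false=0.2)
print(f"Answer: yes (confidence {p:.0%})")  # prints "Answer: yes (confidence 82%)"
```

Stating the confidence does not remove the underlying quantization error, but it at least tells the reader how suspect the answer is.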

Steve Richfield
=================
On 12/1/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>  Steve,
>
> Thanks. I was just looking for a systematic, v basic analysis of the
> problems language poses for any program, which I guess mainly come down to
> multiplicity -
>
> multiple
> -word meanings
> -word pronunciations
> -word spellings
> -word endings
> -word fonts
> -word/letter layout/design
> -languages [mixed discourse]
> -accents
> -dialects
> -sentence constructions
>
> to include new and novel
> -words
> -pronunciations
> -spellings
> -endings
> -layout/design
> -languages
> -accents
> -dialects
> -sentence constructions
>
> -all of which are *advantages* for a GI as opposed to a narrow AI.  The
> latter wants the "right" meaning, the former wants many meanings - which
> enable flexibility and creativity of explanation and association.
>
> Have I left anything out?
>
> MT:
>
>>  I wonder whether you'd like to outline an additional list of
>> "English/language's shortcomings" here. I've just been reading Gary Marcus'
>> Kluge - he has a whole chapter on language's shortcomings, and it would be
>> v. interesting to compare and analyse.
>>
>
> The real world is a wonderful limitless-dimensioned continuum of
> interrelated happenings. We have but a limited window to this, and have an
> even more limited assortment of words that have very specific meanings.
> Languages like Arabic vary pronunciation or spelling to convey additional
> shades of meaning, and languages like Chinese convey meaning via joined
> concepts. These may help, but they do not remove the underlying problem.
> This is like throwing pebbles onto a map and ONLY being able to communicate
> which pebble is closest to the intended location. Further, many words have
> multiple meanings, which is like only being able to specify certain disjoint
> multiples of pebbles, leaving it to AI to take a WAG (Wild Ass Guess) which
> one was intended.
>
> This becomes glaringly obvious in language translation. I learned this stuff
> from people on the Russian national language translator project. Words in
> these two languages have very different shades of meaning, so that in
> general, a sentence in one language can NOT be translated to the other
> language with perfect accuracy, simply because the other language lacks
> words with the same shading. This is complicated by the fact that the
> original author may NOT have intended all of the shades of meaning, but was
> stuck with the words in the dictionary.
>
> For example, a man saying "sit down" in Russian to a woman is conveying
> something like an order (not a request) to "sit down, shut up, and don't
> move". To remove that overloading, he might say "please sit down" in
> Russian. Then, it all comes down to just how he pronounces the "please" as
> to what he REALLY means, but of course, this is all lost in print. So, just
> how do you translate "please sit down" so as not to miss the entire meaning?
>
> One of my favorite pronunciation examples is "excuse me".
>
> In Russian, it is approximately "eezveneetsya minya" and is typically
> spoken with flourish to emphasize apology.
>
> In Arabic, it is approximately "afwan", without emphasis on either syllable,
> and is typically spoken curtly, as if to say "yeah, I know I'm an idiot". It
> is really hard to pronounce these two syllables without emphasis, yet with
> flourish.
>
> There is much societal casting of meaning onto common concepts.
>
> The underlying issue here is the very concept of translation, be it into a
> human language or into a table form in an AI engine. Really good translations
> have more footnotes than translated text, where these shades of meaning are
> explained, yet "modern" translation programs produce no footnotes, which
> pretty much consigns them to the "trash translation" pile, even with perfect
> disambiguation, which of course is impossible. Even AI engines that can
> carry these subtle overloadings are unable to determine which nearby meaning
> the author actually intended.
>
> Hence, no finite language can convey specific meanings from within a
> limitlessly-dimensional continuum of potential meanings. English does better
> than most other languages, but it is still apparently not good enough even
> for automated question answering, which was my original point. Everywhere
> semantic meaning is touched upon, both within the wetware and within
> software, additional errors are introduced. This makes many answers
> worthless and all answers suspect, even before they are formed in the mind
> of the machine.
>
> Have I answered your question?
>
> Steve Richfield
>
>  ------------------------------
>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com/>
>
>


