Steve,

A few odd thoughts in reply. Thanks, BTW, for the article.

1. You don't seem to get what's implicit in the main point: you can't reliably 
work out the sense of an enormous number of words by any kind of word lookup 
whatsoever. How do you actually work out how to "handle the object" - the 
slimy, slippery, twisted, ropey thing-y, or whatever? By looking at it. By 
looking at images of it - either directly or by entertaining them mentally - 
not by consulting any kind of dictionary or word definitions at all. By 
imagining what parts of the object to grip, and how to configure your hands to 
grip it.

2. This discussion brings up an interesting question. I suspect that there is a 
great deal of selectivity in what texts NLP chooses to process - and that they 
don't include how-to, instructional texts, like recipe books (and most 
educational texts), which tell you to do things - "take a cup," "add water," 
etc. - and deal with a real-world situation, in-the-world. If you're dealing 
more in historical texts - "the cat sat on the mat" etc. - you don't have to 
confront the open-ended nature of words quite so violently. Hey, the cat did 
some kind of sitting - as long as that's possible, who cares exactly what kind 
it was? But if you're a cool cat told to "sit" on a real mat that happens to 
be full of objects, and you have to put those instructions into deeds rather 
than more words, you care - and words' open-endedness becomes apparent.

3. While philosophically, intellectually, most people dealing with this area 
may expect words to have precise meanings, they know practically and 
intuitively that this is impossible, and they work on the basis that words can 
have different meanings according to who uses them - and that they themselves 
keep shifting their usage of words. Philosophers, for example, may argue 
philosophically that words can and should have precise meanings and be treated 
as true or false, but know in practice that pretty well all the major 
words/concepts in philosophy - like "mind"/"consciousness"/"determinism" - 
have multiple, indeed endless, definitions. Or just think about AGI'ers and 
"intelligence."

IOW, any general intelligence that wants to use language successfully must 
have a metacognitive/metalinguistic level of thought - one where it asks 
explicitly, as we do, "what does that word mean?" / "do I like that 
definition?" / "is it reliable?" / "how should I use/order words?" / "what is 
the best kind of diction when talking about this subject?" Life's complicated!

P.S. If you haven't read it, I recommend Lakoff's case study of "over" at the 
end of "Women, Fire, and Dangerous Things" - it shows the vast number of 
meanings and schemas that can be attached to that one word, and amplifies this 
discussion.

Mike,

An interesting paper on the meanings of words is "I don't believe in word 
senses" by Adam Kilgarriff.  He concludes:


  Following a description of the conflict between WSD [Word Sense 
Disambiguation] and lexicological research, I examined the concept, ‘word 
sense’. It was not found to be sufficiently well defined to be a workable basic 
unit of meaning. I then presented an account of word meaning in which ‘word 
sense’ or ‘lexical unit’ is not a basic unit. Rather, the basic units are 
occurrences of the word in context (operationalised as corpus citations). In 
the simplest case, corpus citations fall into one or more distinct clusters and 
each of these clusters, if large enough and distinct enough from other 
clusters, forms a distinct word sense. But many or most cases are not simple, 
and even for an apparently straightforward common noun with physical objects as 
denotation, handbag, there are a significant number of aberrant citations. The 
interactions between a word’s uses and its senses were explored in some detail. 
The analysis also charted the potential for lexical creativity. The implication 
for WSD is that word senses are only ever defined relative to a set of 
interests. The set of senses defined by a dictionary may or may not match the 
set that is relevant for an NLP [Natural Language Processing] application. The 
scientific study of language should not include word senses as objects in its 
ontology. Where ‘word senses’ have a role to play in a scientific vocabulary, 
they are to be construed as abstractions over clusters of word usages.
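Kilgarriff's picture - senses as abstractions over clusters of corpus citations - can be turned into a toy sketch. Everything below (representing a citation as the bag of nearby context words, the Jaccard overlap measure, the greedy merge, the threshold) is my own illustrative choice, not Kilgarriff's actual method:

```python
def context(words, i, window=3):
    """Bag of context words within `window` positions of index i,
    excluding the target word itself."""
    lo, hi = max(0, i - window), min(len(words), i + window + 1)
    return set(words[lo:i] + words[i + 1:hi])

def cluster_citations(citations, sim_threshold=0.2):
    """Greedily cluster context sets by Jaccard overlap.
    Each sufficiently large, distinct cluster is a candidate 'sense';
    singleton clusters correspond to aberrant citations."""
    clusters = []  # list of (merged context set, [citation indices])
    for idx, ctx in enumerate(citations):
        for merged, members in clusters:
            union = merged | ctx
            if union and len(merged & ctx) / len(union) >= sim_threshold:
                merged |= ctx       # absorb this citation into the cluster
                members.append(idx)
                break
        else:
            clusters.append((set(ctx), [idx]))
    return clusters
```

On this view the "senses" are nothing more than whatever clusters the corpus and the threshold happen to yield - which is exactly Kilgarriff's point that senses are only ever defined relative to a set of interests.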


  Accordingly, I am attracted to Fluid Construction Grammar in my own work 
because the minimal constituent in that grammar is the construction, which in 
some cases can be a word, but often is not.
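For a flavour of what "the minimal unit is the construction, not the word" means, here is a toy rendering. Real FCG constructions are feature structures applied bidirectionally for both parsing and production, not simple records like this, and the names below are mine, not FCG's:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Construction:
    """Toy form-meaning pairing; the unit of grammar is this pairing,
    which may span one word, several words, or a schematic pattern."""
    name: str
    form: tuple    # surface pattern
    meaning: str

# Constructions larger than a single word:
let_alone = Construction("let-alone", ("let", "alone"),
                         "scalar emphasis between two conjuncts")
kick_bucket = Construction("kick-the-bucket", ("kick", "the", "bucket"),
                           "die (idiomatic; meaning not composed from the words)")
```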

  You gave as an example:


  So if I tell you to "handle" an object, or a piece of business, like say 
  "removing a chair from the house" - that word "handle" is open-ended and 
  gives you vast freedom within certain parameters as to how to apply your 
  hand(s) to that object. 


  The utterance "Texai, handle removing a chair from the house" would, in my 
system, be processed as an imperative construction, parsing out these 
discourse referring objects:

    a. Texai - the software agent commanded to perform the handling action
    b. handling action - specifically, the action in which responsibility for 
accomplishing the removing action is accepted
    c. removing action - the type of removing intended by the author of the 
command
    d. house - the location of the action
    e. chair - the item to be removed
    f. imperative situation - the enclosing utterance situation in which these 
other objects are related
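A parse along those lines might be represented, as a minimal sketch (the class and field names here are mine, not Texai's actual representation), like so:

```python
from dataclasses import dataclass, field

@dataclass
class Referent:
    """One discourse referring object parsed out of the utterance."""
    role: str
    description: str

@dataclass
class ImperativeSituation:
    """The enclosing utterance situation relating the other referents."""
    utterance: str
    referents: list = field(default_factory=list)

    def referent(self, role):
        """Look up a referent by its role, or None if absent."""
        return next((r for r in self.referents if r.role == role), None)

parse = ImperativeSituation(
    utterance="Texai, handle removing a chair from the house",
    referents=[
        Referent("agent", "Texai, the software agent commanded to act"),
        Referent("handling action", "accepting responsibility for the removing action"),
        Referent("removing action", "the type of removing intended by the author"),
        Referent("location", "the house"),
        Referent("item", "the chair to be removed"),
    ],
)
```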

  The Texai system, as I envision it operating, would recognize this command 
as a parametrized task, then either (1) find an existing skill module capable 
of performing the task, or (2) compose a sequence of more primitive skills 
whose combination is capable of performing the task.
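The two alternatives - retrieve an existing skill module, or compose one from more primitive skills - could be sketched as follows. The recursive decomposition table is my own assumption about how step (2) might work, not Texai's actual mechanism:

```python
def plan_task(task, skills, decompositions):
    """Resolve `task` to a flat list of executable skills.
    (1) If an existing skill module matches the task, use it directly.
    (2) Otherwise compose a sequence of more primitive skills, found by
        recursively resolving the task's decomposition."""
    if task in skills:
        return [skills[task]]
    steps = decompositions.get(task)
    if steps is None:
        raise ValueError(f"no skill module or decomposition for {task!r}")
    plan = []
    for step in steps:
        plan.extend(plan_task(step, skills, decompositions))
    return plan
```

So a command like "remove chair from house", absent a single matching module, would be resolved via a decomposition entry such as `{"remove chair": ["grasp chair", "open door", "carry out"]}` into a sequence of primitive skills.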

  As you point out, the task may be performed directly by the agent, or 
indirectly by managing the effort of some other agent.  The author of the 
command does not care which alternative is chosen by the commanded agent - 
hence the use of the word "handle" in this construction.

  -Steve

  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860



  ----- Original Message ----
  From: Mike Tintner <[EMAIL PROTECTED]>
  To: [email protected]
  Sent: Thursday, March 27, 2008 11:04:08 AM
  Subject: Re: [agi] Microsoft Launches Singularity

  John,

  I'm developing this argument more fully elsewhere, so I'll just give a 
  partial gist. What I'm saying - and I stand to be corrected - is that I 
  suspect that literally no one in AI and AGI (and perhaps philosophy) present 
  or past understands the nature of the tools they are using.

  All the tools - all the sign systems currently used - especially language - 
  are actually general-purpose - AS USED BY THE HUMAN BRAIN.

  The whole point of just about every word in language is that it constitutes 
  a general, open brief which can be instantiated in any one of an infinite 
  set of ways.

  So if I tell you to "handle" an object, or a piece of business, like say 
  "removing a chair from the house" - that word "handle" is open-ended and 
  gives you vast freedom within certain parameters as to how to apply your 
  hand(s) to that object. Your hands can be applied to move a given box, for 
  example, in a vast if not infinite range of positions and trajectories. Such 
  a general, open concept is of the essence of general intelligence, because 
  it means that you are immediately ready to adapt to new kinds of situation - 
  if your normal ways of handling boxes are blocked, you are ready to seek out 
  or improvise some strange new contorted two-finger hand position to pick up 
  the box - which also counts as "handling". (And you will have actually done a 
  lot of this).

  So what is the "meaning" of "handle"? Well, to be precise, it doesn't have 
  a/one meaning, and isn't meant to - it has a range of possible 
  meanings/references, and you can choose which is most convenient in the 
  circumstances.

  The same principles apply to just about every word in language and every 
  unit of logic and mathematics.

  But - and correct me - I don't think anyone in AI/AGI is using language or 
  any logico-mathematical systems in this general, open-ended way - the way 
  they are actually meant to be used - and the very foundation of General 
  Intelligence.

  Language and the other systems are always used by AGI in specific ways to 
  have specific meanings. YKY, typically, wanted a language for his system 
  which had precise meanings. Even Ben, I suspect, may only employ words in an 
  "open" way, in that their meanings can be changed with experience - but at 
  any given point their meanings will have to be specific.

  To be capable of generalising as the human brain does - and of true AGI - 
  you have to have a brain that simultaneously processes on at least two if 
  not three levels, with two/three different sign systems - including both 
  general and particular ones.



  John:

  >> Charles: I don't think a General Intelligence could be built entirely
  >> out of narrow AI components, but it might well be a relatively trivial
  >> add-on. Just consider how much of human intelligence is demonstrably
  >> "narrow AI" (well, not artificial, but you know what I mean). Object
  >> recognition, e.g. Then start trying to guess how much of the part that
  >> we can't prove a classification for is likely to be a narrow
  >> intelligence component. In my estimation (without factual backing) less
  >> than 0.001 of our intelligence is General Intelligence, possibly much
  >> less.
  >>
  >> John: I agree that it may be <1%.
  >>
  >> Mike: Oh boy, does this strike me as absurd. Don't have time for the
  >> theory right now, but just had to vent. Percentage estimates strike me
  >> as a bit silly, but if you want to aim for one, why not look at both
  >> your paragraphs, word by word: "don't", "think", "might", "relatively",
  >> etc. Now which of those words can only be applied to a single type of
  >> activity, rather than an open-ended set of activities? Which cannot be
  >> instantiated in an open-ended if not infinite set of ways? Which is not
  >> a very valuable if not key tool of a General Intelligence, one that can
  >> adapt to solve problems across domains? Language, IOW, is the central
  >> (but not essential) instrument of human general intelligence - and I
  >> can't think offhand of a single word that is not a tool for
  >> generalising across domains, including "Charles H." and "John G.".
  >>
  >> In fact, every tool you guys use - logic, maths etc. - is similarly
  >> general and functions in similar ways. The above strikes me as a 99%
  >> failure to understand the nature of general intelligence.
  >
  > Mike, you are 100% potentially right with a margin of error of 110%. LOL!
  >
  > Seriously, Mike, how do YOU indicate approximations? And how are you
  > differentiating general and specific? And declaring relative absolutes
  > and convenient infinitudes... I'm trying to understand your argument.
  >
  > John




-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com
