It's true, a "word sense" is not a crisp thing like a part-of-speech
... it's more of a cluster among usage-instances...

Yet, this kind of fuzzy, cluster-type category does play an important
role in cognition, no?
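The cluster view can even be operationalised directly: represent each corpus citation by a bag of its context words and group citations whose contexts resemble each other. A toy sketch in Python -- the citations, stopword list, similarity threshold, and greedy clustering scheme are all invented for illustration, not anyone's actual WSD system:

```python
from collections import Counter
import math

# Crude stopword list, just enough for the toy citations below.
STOP = {"the", "a", "an", "will", "please", "your", "she", "with", "both", "today"}

def bag_of_context(sentence, target="handle"):
    # Context vector: the citation's words minus the target word and stopwords.
    return Counter(w for w in sentence.lower().split()
                   if w != target and w not in STOP)

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_usages(citations, threshold=0.3):
    # Greedy single-pass clustering: each citation joins the first cluster
    # whose (summed) centroid it resembles, else it starts a new cluster.
    clusters = []  # list of [centroid Counter, member sentences]
    for sent in citations:
        bag = bag_of_context(sent)
        for cluster in clusters:
            if cosine(bag, cluster[0]) >= threshold:
                cluster[0].update(bag)
                cluster[1].append(sent)
                break
        else:
            clusters.append([Counter(bag), [sent]])
    return [members for _, members in clusters]

# Invented corpus citations for "handle":
citations = [
    "please handle the box with both hands",
    "handle the crate carefully with your hands",
    "the manager will handle the complaint",
    "she will handle the customer complaint today",
]
senses = cluster_usages(citations)
```

On these four invented citations, the "grasp with the hands" uses and the "deal with a matter" uses of "handle" land in separate clusters -- exactly the kind of fuzzy, usage-derived sense Kilgarriff describes below.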

ben g

2008/3/27 Stephen Reed <[EMAIL PROTECTED]>:
>
> Mike,
>
> An interesting paper on the meanings of words is "I don't believe in word
> senses" by Adam Kilgarriff.  He concludes:
>
> Following a description of the conflict between WSD [Word Sense
> Disambiguation] and lexicological research, I examined the concept, 'word
> sense'. It was not found to be sufficiently well defined to be a workable
> basic unit of meaning. I then presented an account of word meaning in which
> 'word sense' or 'lexical unit' is not a basic unit. Rather, the basic units
> are occurrences of the word in context (operationalised as corpus
> citations). In the simplest case, corpus citations fall into one or more
> distinct clusters and each of these clusters, if large enough and distinct
> enough from other clusters, forms a distinct word sense. But many or most
> cases are not simple, and even for an apparently straightforward common noun
> with physical objects as denotation, handbag, there are a significant number
> of aberrant citations. The interactions between a word's uses and its senses
> were explored in some detail. The analysis also charted the potential for
> lexical creativity. The implication for WSD is that word senses are only
> ever defined relative to a set of interests. The set of senses defined by a
> dictionary may or may not match the set that is relevant for an NLP [Natural
> Language Processing] application. The scientific study of language should
> not include word senses as objects in its ontology. Where 'word senses' have
> a role to play in a scientific vocabulary, they are to be construed as
> abstractions over clusters of word usages.
>
> Accordingly, I am attracted to Fluid Construction Grammar in my own work
> because the minimal constituent in that grammar is the construction, which
> in some cases can be a word, but often is not.
>
> You gave as an example:
>
>
> So if I tell you to "handle" an object, or a piece of business, like say
> "removing a chair from the house" - that word "handle" is open-ended and
> gives you vast freedom within certain parameters as to how to apply your
> hand(s) to that object.
>  The utterance "Texai, handle removing a chair from the house" would, in my
> system, be processed as an imperative construction, parsing out these
> discourse referring objects:
>
> Texai - the software agent commanded to perform the handling action
> handling action - specifically, the action in which responsibility for
> accomplishing the removing action is accepted
> removing action - the type of removing intended by the author of the command
> house - the location of the action
> chair - the item to be removed
> imperative situation - the enclosing utterance situation in which these
> other objects are related
> The Texai system, as I envision it operating, would recognize this
> command as a parametrized task, then either (1) find an existing skill
> module capable of performing the task, or (2) compose a sequence of more
> primitive skills whose combination is capable of performing the task.
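That two-branch dispatch -- reuse a whole skill module if one can perform the task, else compose primitives per subgoal -- could be sketched as follows; the Task/Skill types and the chair-removal decomposition are hypothetical stand-ins, not Texai's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subgoals: list = field(default_factory=list)

@dataclass
class Skill:
    name: str
    performs: frozenset  # task names this module can carry out

    def can_perform(self, task):
        return task.name in self.performs

def dispatch(task, skill_modules, primitives):
    # (1) Prefer an existing skill module capable of performing the whole task.
    for skill in skill_modules:
        if skill.can_perform(task):
            return [skill]
    # (2) Otherwise compose a sequence of more primitive skills, one per subgoal.
    plan = []
    for goal in task.subgoals:
        prim = next((p for p in primitives if p.can_perform(goal)), None)
        if prim is None:
            raise LookupError(f"no skill for subgoal {goal.name!r}")
        plan.append(prim)
    return plan

# Hypothetical decomposition of "handle removing a chair from the house":
remove_chair = Task("remove-chair", [Task("grasp"), Task("carry"), Task("place")])
primitives = [Skill("grasp", frozenset({"grasp"})),
              Skill("carry", frozenset({"carry"})),
              Skill("place", frozenset({"place"}))]
plan = dispatch(remove_chair, skill_modules=[], primitives=primitives)
```

With no skill module covering the whole task, the dispatcher falls through to branch (2) and returns the grasp/carry/place primitives in order.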
>
> As you point out, the task may be performed directly by the agent, or
> indirectly by managing the effort of some other agent.  The author of the
> command does not care which alternative is chosen by the commanded agent -
> hence the use of the word "handle" in this construction.
>
> -Steve
>
> Stephen L. Reed
>
>
> Artificial Intelligence Researcher
> http://texai.org/blog
> http://texai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860
>
>
>
> ----- Original Message ----
> From: Mike Tintner <[EMAIL PROTECTED]>
> To: [email protected]
> Sent: Thursday, March 27, 2008 11:04:08 AM
> Subject: Re: [agi] Microsoft Launches Singularity
>
>  John,
>
> I'm developing this argument more fully elsewhere, so I'll just give a
> partial gist. What I'm saying - and I stand to be corrected - is that I
> suspect that literally no one in AI and AGI (and perhaps philosophy) present
> or past understands the nature of the tools they are using.
>
> All the tools - all the sign systems currently used - especially language -
> are actually general-purpose - AS USED BY THE HUMAN BRAIN.
>
> The whole point of just about every word in language is that it constitutes
> a general, open brief which can be instantiated in any one of an infinite
> set of ways.
>
> So if I tell you to "handle" an object, or a piece of business, like say
> "removing a chair from the house" - that word "handle" is open-ended and
> gives you vast freedom within certain parameters as to how to apply your
> hand(s) to that object. Your hands can be applied to move a given box, for
> example, in a vast if not infinite range of positions and trajectories. Such
> a general, open concept is of the essence of general intelligence, because
> it means that you are immediately ready to adapt to new kinds of situation -
> if your normal ways of handling boxes are blocked, you are ready to seek out
> or improvise some strange new contorted two-finger hand position to pick up
> the box - which also counts as "handling". (And you will have actually done a
> lot of this).
>
> So what is the "meaning" of "handle"? Well, to be precise, it doesn't have
> a/one meaning, and isn't meant to - it has a range of possible
> meanings/references, and you can choose which is most convenient in the
> circumstances.
>
>
> The same principles apply to just about every word in language and every
> unit of logic and mathematics.
>
> But - and correct me - I don't think anyone in AI/AGI is using language or
> any logico-mathematical systems in this general, open-ended way - the way
> they are actually meant to be used - and the very foundation of General
> Intelligence.
>
> Language and the other systems are always used by AGI in specific ways to
> have specific meanings. YKY, typically, wanted a language for his system
> which had precise meanings. Even Ben, I suspect, may only employ words in an
> "open" way, in that their meanings can be changed with experience - but at
> any given point their meanings will have to be specific.
>
> To be capable of generalising as the human brain does - and of true AGI -
> you have to have a brain that simultaneously processes on at least two if
> not three levels, with two/three different sign systems - including both
> general and particular ones.
>
>
>
> >> >> Charles: I don't think a General Intelligence could be built
> >> >> entirely out of narrow AI components, but it might well be a
> >> >> relatively trivial add-on. Just consider how much of human
> >> >> intelligence is demonstrably "narrow AI" (well, not artificial, but
> >> >> you know what I mean). Object recognition, e.g. Then start trying to
> >> >> guess how much of the part that we can't prove a classification for
> >> >> is likely to be a narrow intelligence component. In my estimation
> >> >> (without factual backing) less than 0.001 of our intelligence is
> >> >> General Intelligence, possibly much less.
> >> >
> >> > John: I agree that it may be <1%.
> >>
> >> Oh boy, does this strike me as absurd. Don't have time for the theory
> >> right now, but just had to vent. Percentage estimates strike me as a
> >> bit silly, but if you want to aim for one, why not look at both your
> >> paragraphs, word by word: "Don't", "think", "might", "relatively", etc.
> >> Now which of those words can only be applied to a single type of
> >> activity, rather than an open-ended set of activities? Which cannot be
> >> instantiated in an open-ended if not infinite set of ways? Which is not
> >> a very valuable if not key tool of a General Intelligence, that can
> >> adapt to solve problems across domains? Language IOW is the central
> >> (but not essential) instrument of human general intelligence - and I
> >> can't think offhand of a single word that is not a tool for
> >> generalising across domains, including "Charles H." and "John G.".
> >>
> >> In fact, every tool you guys use - logic, maths etc. - is similarly
> >> general and functions in similar ways. The above strikes me as a 99%
> >> failure to understand the nature of general intelligence.
> >>
> >
> > Mike you are 100% potentially right with a margin of error of 110%. LOL!
> >
> > Seriously Mike how do YOU indicate approximations? And how are you
> > differentiating general and specific? And declaring relative absolutes and
> > convenient infinitudes... I'm trying to understand your argument.
> >
> > John
> >
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com
