This is one technique that can be applied to derive meaning from text.  But 
the first step is to be able to fluidly identify the chunks of text that 
give context to individual words: parts of speech (noun, verb, subject and 
object phrases).  In my own work I lean towards least-energy metrics (somewhat 
captured in LSA's association angles).  The point isn't having a perfect 
system.  Name one part of xTalk that is perfect!  The point is having some 
semantically applicable tools at all.  Again I must stress, these words 
obligate no one to any action of any kind.  They are simply one person's 
opinion.  
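For anyone curious what "association angles" look like in practice, here is a 
minimal sketch of the LSA idea in Python, using a toy term-document count 
matrix (the terms, counts, and function name are all illustrative, not from 
any real system):

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents (raw counts).
terms = ["cat", "dog", "pet", "stack", "script"]
counts = np.array([
    [2, 0, 1, 0],   # cat
    [0, 2, 1, 0],   # dog
    [1, 1, 2, 0],   # pet
    [0, 0, 0, 3],   # stack
    [0, 0, 1, 2],   # script
], dtype=float)

# Truncated SVD: keep k latent dimensions.  Terms that appear in similar
# documents end up pointing in similar directions in this reduced space.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]   # each row is one term in latent space

def association_angle(a, b):
    """Cosine of the angle between two term vectors in the latent space."""
    va = term_vecs[terms.index(a)]
    vb = term_vecs[terms.index(b)]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Related terms score higher than unrelated ones:
print(association_angle("cat", "dog"))    # terms that share contexts
print(association_angle("cat", "stack"))  # terms that don't
```

The cosine is the "angle" in question: two terms that occur in similar 
document contexts come out with a small angle between them, even if they 
never co-occur directly.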
randall

-----Original Message-----
From: John Vokey <[email protected]>
Sent: Saturday, August 22, 2009 7:05 PM
To: [email protected]
Subject: Re: Syllabic division of words

Something close to what I think Randall was talking about is LSA  
(Latent Semantic Analysis).  Indeed, rumour has long held that it is  
the basis of Apple's junk filter in Safari.  The original source for  
the work is here:
<http://lsa.colorado.edu/>

On 22-Aug-09, at 6:47 PM, [email protected] wrote:

> Randall,
>
> OK, well let's carry on from there.
>
> Are you familiar with this project?
> http://www.research.sun.com/knowledge/papers.html
>
> Something similar in xTalk would be pretty darn cool, I would think.
> Especially if it were to index and search the web =).

--
Please avoid sending me Word or PowerPoint attachments.
See <http://www.gnu.org/philosophy/no-word-attachments.html>




_______________________________________________
use-revolution mailing list
[email protected]
Please visit this url to subscribe, unsubscribe and manage your subscription 
preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution


