Matt: Semantic models learn associations by proximity in the training text. The
degree to which you associate "snake" and "rope" depends on how often these
words appear near each other.

Correct me if I'm wrong, but it's the old, old problem here, isn't it? Those semantic models/programs won't be able to form any *new* analogies, will they? Or understand newly minted analogies in texts? And I'm very dubious about their power to form valid associations of much value, in the ways you describe, even from existing texts.

You're saying that there's a semantic model/program that can answer, if asked:

"yes - 'snake, chain, rope, spaghetti strand' is a legitimate/valid series of associations" / "yes, they fit together" (based on previous textual analysis)?

or: "the odd one out in 'snake, chain, cigarette, rope' is 'cigarette'"?
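For what it's worth, the proximity-based mechanism Matt describes can be sketched in a few lines: count which words co-occur near each target word, then call "odd one out" the word whose co-occurrence vector is least similar to the others. The mini-corpus below is invented purely for illustration (no real system trains on five sentences), but it shows the shape of the computation:

```python
# Toy sketch of distributional association: the corpus and window size are
# illustrative assumptions, not any particular published model.
from collections import Counter
from math import sqrt

corpus = [
    "the snake coiled like a rope on the ground",
    "he tied the rope into a loop like a chain",
    "the chain hung like a rope from the hook",
    "a strand of spaghetti curled like a snake",
    "he lit a cigarette and smoked the cigarette slowly",
]

def cooccurrence_vector(word, sentences, window=4):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for s in sentences:
        toks = s.split()
        for i, t in enumerate(toks):
            if t == word:
                lo, hi = max(0, i - window), min(len(toks), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[toks[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[k] * b[k] for k in a if k in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def odd_one_out(words, sentences):
    """Return the word least similar, on average, to the rest of the set."""
    vecs = {w: cooccurrence_vector(w, sentences) for w in words}
    def avg_sim(w):
        return sum(cosine(vecs[w], vecs[v]) for v in words if v != w) / (len(words) - 1)
    return min(words, key=avg_sim)

print(odd_one_out(["snake", "chain", "cigarette", "rope"], corpus))
# prints: cigarette
```

On this toy data the answer falls out because "cigarette" shares almost no context words with the others. Whether that counts as "understanding" the association, or just reflecting what's already written down, is exactly the question being argued here.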

I have yet to find or be given a single useful analogy drawn by a computer (despite asking many times). The only kind of analogy I can remember here is Ed, I think, pointing to Hofstadter's analogies along the lines of "xxyy" is like "xxxxyyyy". Not exactly a big deal. No doubt there must be more, but my impression is that in general computers are still pathetic here.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email