On Wed, May 17, 2017 at 11:50 AM, Ben Goertzel <[email protected]> wrote:

> > Anyway, I don't see anything in that paper that is worth saving. It's old
> > crap, we've been doing better for years, Rohit demonstrated that.
>
> Well, we have actually not demonstrated better results than those
> Stanford guys on word sense disambiguation or unsupervised
> part-of-speech learning....  Maybe we can get better results than them
> using the stuff you and Rohit were doing, I dunno...   I kinda doubt
> it, but that's an empirical question...
>

If by "part of speech" you mean "average vertex degree", then yes .. but
we've figured out that one reason for the bad data is that Wikipedia doesn't
have any verbs in it.  I'm hoping that parsing Project Gutenberg adventure
novels will fix this .. except that I just experienced a big data loss; see
other email.

I'm vaguely thinking of buying a pair of terabyte SSDs, because
processing is definitely bottlenecked on disk I/O, but those drives remain
expensive. I'm also concerned that the very high write volume might burn
them out in a year.

>
> My own experience and intuition is that agglomerative clustering is
> crude and works pretty badly, and I think these NN techniques can do
> better...
>
OK.

Have you done agglomerative clustering in these super-sparse,
high-dimensional spaces?


>
> But we don't need to argue about this stuff....  I mean, the beauty of
> this sort of work is that one has data and one can try different
> algorithms and see what the results are like.


Yes OK.


>  You've done this
> excellent work building the first-phase MST parses,


Caution about terminology: the MST parses are discarded immediately after
they are created; the only things saved are the counts of how often each
word-disjunct pair occurs.
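A much-simplified sketch of that pipeline stage (the parse format and the disjunct encoding here are invented for illustration; real Link Grammar disjuncts record typed connectors and word order):

```python
from collections import Counter

def count_disjuncts(parses):
    """Accumulate (word, disjunct) counts from MST parses, then drop
    the parses themselves -- only the counts survive."""
    counts = Counter()
    for links in parses:          # each parse: list of (head, dependent) links
        per_word = {}
        for head, dep in links:
            # Each word's "disjunct" here is just the sorted multiset of
            # its link partners, tagged by link direction (a crude stand-in
            # for real Link Grammar connector expressions).
            per_word.setdefault(head, []).append((dep, "+"))
            per_word.setdefault(dep, []).append((head, "-"))
        for word, conns in per_word.items():
            counts[(word, tuple(sorted(conns)))] += 1
    return counts

# One toy parse: "Alice saw stars", with "saw" as the head of both links.
counts = count_disjuncts([[("saw", "Alice"), ("saw", "stars")]])
```

The point being that nothing about the tree itself is retained downstream; the pair counts are the whole output of the stage.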

--linas
