Hi Eric,

>         Thanks Olga, BUT I don't see how Spearman's (or other) rank
>         correlation formula measures overlap between 2 ranked lists
>         containing some different words.

In a recent work, I had a similar evaluation problem, though not exactly the
same. I had gold standard rankings of locations given an input object
(crowdsourced by asking the question "where is object X likely to be
found?"). I then had a system produce similar rankings on the same set of
locations given an object. In theory the sets of locations are the same,
but in practice the system doesn't have perfect recall, so its list is
shorter and the overlap between the two lists is not total. In my case, I
"solved" this problem by adding the missing items at the bottom of the
ranking. This could be a quick fix for your problem too: just add the items
of list A that are missing from list B at the bottom of the ranking for B,
and vice versa. Not really elegant, admittedly.

The problem with a rank correlation metric like Kendall's Tau, in my
case, is that I needed to give more importance to guessing the top of the
list correctly, and less importance to the bottom. My understanding is
that your evaluation should have this feature too. I ended up using nDCG (
https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG). I
found this metric accurate, in that it reflects my intuitions on my data,
but not very intuitive to interpret. So, in that work, I also measured
precision at 1 and at 3, that is, the number of times my system guessed the
first item and the top 3 items correctly, respectively.
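For concreteness, here is a rough sketch of nDCG and precision at k for a
single ranking, roughly along the lines of what I did. The relevance
grades here are derived from the gold rank positions, which is one common
choice among several, and all the names are just illustrative:

    import math

    def dcg(relevances):
        # DCG = sum_i rel_i / log2(i + 1), positions starting at 1.
        return sum(rel / math.log2(i + 1)
                   for i, rel in enumerate(relevances, start=1))

    def ndcg(system, gold):
        # Grade each item by its gold position: top item = highest grade,
        # items missing from the gold ranking get grade 0.
        grade = {item: len(gold) - rank for rank, item in enumerate(gold)}
        ideal = dcg(sorted(grade.values(), reverse=True))
        return dcg([grade.get(item, 0) for item in system]) / ideal if ideal else 0.0

    def precision_at_k(system, gold, k):
        # Fraction of the system's top k that appear in the gold top k.
        return len(set(system[:k]) & set(gold[:k])) / k

    gold = ["kitchen", "garage", "shed", "attic"]
    system = ["kitchen", "shed", "garage", "attic"]
    print(ndcg(system, gold))               # ~0.98: only positions 2 and 3 swapped
    print(precision_at_k(system, gold, 1))  # 1.0: first item guessed correctly
    print(precision_at_k(system, gold, 3))  # 1.0: same items in the top 3

To get the aggregate numbers I mention above (how often the system gets the
top item / top 3 right), you would average these per-query scores over all
your input objects.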