Hi Pajolma,

As far as I know there are no separate evaluations out of the box, but you
could use the Milne-Witten corpus to evaluate the spotter and the
disambiguation step separately.

In my experience, problems are usually related to spotting: surface forms
that are not in the model, or surface forms with too low a probability.

There is also a specific corpus for evaluating disambiguation (KORE50).
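
If you want rough per-phase timings from your own local instance, one option
is to time the /rest/spot and /rest/annotate endpoints separately and take the
difference as an approximation of the disambiguation cost. Below is a minimal
Java sketch along those lines; the base URL, port 2222, and the sample text
are only assumptions from a default local setup, so adjust them to match your
installation.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class SpotlightPhaseTiming {

    // Base URL of a local Spotlight instance; port and path follow the usual
    // defaults, adjust them to your own setup.
    private static final String BASE = "http://localhost:2222/rest";

    public static void main(String[] args) throws Exception {
        String text = "Berlin is the capital of Germany.";

        long spotMs = timeRequest(BASE + "/spot", text);         // spotting only
        long annotateMs = timeRequest(BASE + "/annotate", text); // full pipeline

        System.out.println("spotting:        " + spotMs + " ms");
        System.out.println("full annotation: " + annotateMs + " ms");
        // Rough estimate: disambiguation is approximately the full pipeline
        // minus the spotting phase.
        System.out.println("disambiguation (approx.): " + (annotateMs - spotMs) + " ms");
    }

    private static long timeRequest(String endpoint, String text) throws Exception {
        String query = "?text=" + URLEncoder.encode(text, "UTF-8");
        HttpURLConnection conn =
                (HttpURLConnection) new URL(endpoint + query).openConnection();
        conn.setRequestProperty("Accept", "application/json");

        long start = System.nanoTime();
        try (BufferedReader in =
                     new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            while (in.readLine() != null) { /* drain the response */ }
        }
        long elapsed = (System.nanoTime() - start) / 1_000_000;
        conn.disconnect();
        return elapsed;
    }
}

Run it a few times and discard the first measurement, since the first request
usually pays a warm-up cost that would skew the comparison.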



On Tue, Jun 2, 2015 at 1:58 PM, Pajolma Rupi <[email protected]> wrote:

> Dear all,
>
> I was not able to find any information regarding the time performance of
> the Spotlight service for each of the phases separately: phrase spotting
> (candidate generation, candidate selection), disambiguation, indexing. There
> are some numbers in the paper "*Improving efficiency and accuracy
> in multilingual entity extraction*", but they are calculated in the
> context of the whole annotation process, whereas I'm interested in knowing
> during which specific phase the service performs better and during which
> phase it performs worse.
>
> Could you please let me know if such information exists already?
> I would also be interested in knowing if I can produce such information by
> running my own local instance of Spotlight (I'm using Java in order to
> annotate text).
>
> Thank you in advance,
> Pajolma
>
>
