Olivier, all--
I was wondering if anybody knows of a mapping from IPTC Media Topics to
Wikipedia categories?

http://cv.iptc.org/newscodes/mediatopic

I found this question here, but no good answer:
http://answers.semanticweb.com/questions/11457/anyone-published-mappings-between-dbpedia-and-subject-vocabularies-like-iptc-ddc-lcsh

Cheers,
Pablo

On Fri, Jun 8, 2012 at 5:11 PM, Olivier Grisel <[email protected]> wrote:

> 2012/6/8 valentina presutti <[email protected]>:
> > Luca, I am forwarding this to the Stanbol mailing list; please use this
> > one next time.
> > If you're not yet subscribed, please do so :)
>
> Good reflex to use the ML. Here is a copy of my reply:
>
> On 8 June 2012 12:34, Luca Cervone <[email protected]> wrote:
> > Dear Olivier,
> > I'm Luca Cervone from the University of Bologna.
> > I will take part in the IKS meeting next week, at which I'll present the
> > Facebook game we developed with Prof. Valentina Presutti.
> > The game uses the Stanbol engines to create a resource for sentiment
> > analysis.
> > We are trying to improve the game, and for this reason we are evaluating
> > the possibility of using the topic classification engine.
> > We have seen that issue 197 (the REST API) is still open. Could you
> > kindly give us an update on the status of the work?
>
> It mostly works but still lacks some web UI and documentation.
>
> > Is it possible to have a development version of the API so that we can
> > install it in our own Stanbol instance?
>
> Yes, you can build and deploy the enhancer/engines/topic and
> enhancer/topic-web bundles on a stable or full distribution of
> Stanbol. Then you can use the web console to create a configuration
> for a new classification engine and a matching training set (leave
> the Solr server parameters empty to get a default configuration that
> should work).
>
> Then you can have a look at the topic-web Java source for the JAX-RS
> endpoint to work out the HTTP API of the classifier model.
>
> You will need to use this HTTP API to import the (possibly
> hierarchical) target concepts into the model, using either an
> individual HTTP POST request for each concept or a SKOS RDF taxonomy.
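The SKOS route mentioned above can be sketched with a short script that builds a minimal taxonomy in Turtle. Note that the concept URIs and labels below are invented for illustration, and the upload endpoint itself is not shown because, as Olivier says, it has to be read off the topic-web JAX-RS source:

```python
# Sketch: build a minimal SKOS taxonomy (Turtle syntax) with a
# two-level concept hierarchy. All URIs and labels are illustrative
# placeholders, not real Stanbol or IPTC identifiers.

# Each entry maps a concept URI to (label, broader-concept URI or None).
concepts = {
    "http://example.org/topics/sport": ("Sport", None),
    "http://example.org/topics/football": (
        "Football", "http://example.org/topics/sport"),
}

lines = ["@prefix skos: <http://www.w3.org/2004/02/skos/core#> ."]
for uri, (label, broader) in concepts.items():
    lines.append("<%s> a skos:Concept ;" % uri)
    # End the statement with ';' if a skos:broader triple follows.
    lines.append('    skos:prefLabel "%s"@en %s'
                 % (label, ";" if broader else "."))
    if broader:
        lines.append("    skos:broader <%s> ." % broader)

taxonomy_ttl = "\n".join(lines)
print(taxonomy_ttl)
```

The resulting Turtle document could then be POSTed to the model-import endpoint in a single request instead of one request per concept.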
>
> Then use the API to upload many categorized text examples for each
> registered concept, and make another call to train the model.
>
> There are some Python scripts that do this for IPTC NewsML files or
> for a corpus of categorized articles extracted from DBpedia:
>
>
> http://svn.apache.org/repos/asf/incubator/stanbol/trunk/enhancer/topic-web/tools/
>
> Here is a sample training corpus pre-extracted using the dbpediakit tools:
>
>
> https://dl.dropbox.com/u/5743203/IKS/dbpedia/dbpediakit-output/dbpedia-taxonomy.tsv
>
> https://dl.dropbox.com/u/5743203/IKS/dbpedia/dbpediakit-output/dbpedia-examples.tsv.bz2
>
> --
> Olivier
> http://twitter.com/ogrisel - http://github.com/ogrisel
>
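For anyone wanting to reuse those TSV corpora for their own training uploads, a minimal reader might look like the sketch below. It assumes a two-column layout (concept identifier, then example text, tab-separated); the actual column layout of dbpedia-examples.tsv may differ, so check the file before relying on this:

```python
# Sketch: group categorized text examples per concept from a TSV
# corpus, assuming a hypothetical two-column layout:
#   concept <TAB> example text
# The in-memory sample below stands in for a real corpus file.
import csv
import io
from collections import defaultdict

sample_tsv = (
    "Category:Sport\tFootball is played worldwide.\n"
    "Category:Sport\tThe Olympics are held every four years.\n"
    "Category:Music\tThe symphony premiered in Vienna.\n"
)

examples = defaultdict(list)
reader = csv.reader(io.StringIO(sample_tsv), delimiter="\t")
for concept, text in reader:
    examples[concept].append(text)

# One upload batch per registered concept.
print({concept: len(texts) for concept, texts in examples.items()})
```

Each per-concept batch would then be sent to the classifier's example-upload call before triggering the training call.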
