Author: agruber
Date: Wed Sep 14 15:16:02 2011
New Revision: 1170676

URL: http://svn.apache.org/viewvc?rev=1170676&view=rev
Log:
Adding a howto on creating and working with local indexes and the taxonomy engine

Added:
    incubator/stanbol/site/trunk/content/stanbol/docs/trunk/customvocabulary.mdtext
Modified:
    incubator/stanbol/site/trunk/content/stanbol/docs/trunk/index.mdtext

Added: incubator/stanbol/site/trunk/content/stanbol/docs/trunk/customvocabulary.mdtext
URL: http://svn.apache.org/viewvc/incubator/stanbol/site/trunk/content/stanbol/docs/trunk/customvocabulary.mdtext?rev=1170676&view=auto
==============================================================================
--- incubator/stanbol/site/trunk/content/stanbol/docs/trunk/customvocabulary.mdtext (added)
+++ incubator/stanbol/site/trunk/content/stanbol/docs/trunk/customvocabulary.mdtext Wed Sep 14 15:16:02 2011
@@ -0,0 +1,114 @@
+Title: Using custom/local vocabularies with Apache Stanbol
+
+For text enhancement and linking to external sources, the Entityhub lets you work with local indexes of datasets. There are several reasons for this: firstly, you may not want to rely on internet connectivity to these services; secondly, you may want to manage local changes to these public repositories; and thirdly, you may want to work with local resources only, such as your LDAP directory or a specific, private enterprise vocabulary of your domain.
+
+The other main possibility is to upload ontologies to the ontology manager and to use the reasoning components on top of them.
+
+This document focuses on two cases:
+
+- Creating and using a local Solr index of a given vocabulary, e.g. a SKOS thesaurus or taxonomy of your domain
+- Directly working with individual instance entities from given ontologies, e.g. a FOAF repository.
+
+## Creating and working with local indexes
+
+The ability to work with custom vocabularies in Stanbol is necessary for many organizational use cases, such as being able to detect various types of named entities specific to a company or to detect and work with concepts from a specific domain. Stanbol provides the machinery to start with vocabularies in standard languages such as [SKOS - Simple Knowledge Organization System](http://www.w3.org/2004/02/skos/) or more general [RDF](http://www.w3.org/TR/rdf-primer/) encoded data sets. The Stanbol components needed for this functionality are the Entityhub, for creating and managing the index, and several [Enhancement Engines](engines.html) that make use of the index during the enhancement process.
+
+### Create your own index
+
+**Step 1: Create the indexing tool**
+
+The indexing tool provides a default configuration for creating a Solr index of RDF files (e.g. a SKOS export of a thesaurus or a set of FOAF files).
+
+(1) If the tool has not yet been built during the Stanbol build process of the Entityhub, call
+
+    mvn install
+
+in the directory <code>{root}/entityhub/indexing/genericrdf/</code> and then
+
+    mvn assembly:single
+
+Move the generated tool from
+
+    target/org.apache.stanbol.entityhub.indexing.genericrdf-*-jar-with-dependencies.jar
+
+into a custom directory where you want to index your files.
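+
+As a minimal sketch, the whole of step 1 might look like this on the command line (the target directory <code>~/myindex</code> is only a placeholder for wherever you want to build your index):
+
+    cd {root}/entityhub/indexing/genericrdf/
+    mvn install
+    mvn assembly:single
+    mkdir -p ~/myindex
+    cp target/org.apache.stanbol.entityhub.indexing.genericrdf-*-jar-with-dependencies.jar ~/myindex/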
+
+
+**Step 2: Create the index**
+
+Initialize the tool with
+
+    java -jar org.apache.stanbol.entityhub.indexing.genericrdf-*-jar-with-dependencies.jar init
+
+You will get a directory with the default configuration files, one directory for the sources, and a distribution directory for the resulting files. Make sure that you adapt the default configuration with at least the name of your index and the namespaces and properties you want to include in the index, and copy your source files into the directory <code>indexing/resources/rdfdata</code>. Several standard RDF formats, multiple files and archives of them are supported. *For details on the possible configurations, please consult <code>{root}/entityhub/indexing/genericrdf/readme.md</code>.*
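+
+As an example, the preparation of the index directory could look like this, assuming the tool jar was copied to <code>~/myindex</code> and your vocabulary is available as <code>my-thesaurus.rdf</code> (both names are only placeholders):
+
+    cd ~/myindex
+    java -jar org.apache.stanbol.entityhub.indexing.genericrdf-*-jar-with-dependencies.jar init
+    # adapt the generated default configuration (index name, namespaces, properties to include)
+    # and copy the source files into the rdfdata directory
+    cp /path/to/my-thesaurus.rdf indexing/resources/rdfdata/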
+
+Then, you can start the indexing by running
+
+    java -Xmx1024m -jar org.apache.stanbol.entityhub.indexing.genericrdf-*-jar-with-dependencies.jar index
+
+Depending on your hardware and on the complexity and size of your sources, it may take several hours to build the index. As a result, you will get an archive of a [Solr](http://lucene.apache.org/solr/) index together with an OSGi bundle to work with the index in Stanbol.
+
+
+**Step 3: Initialize the index within Stanbol**
+
+On your running Stanbol instance, copy the ZIP archive into <code>{root}/sling/datafiles</code>. Then, in the "Bundles" tab of the administration console, add and start the <code>org.apache.stanbol.data.site.{name}-{version}.jar</code>.
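+
+As a sketch, assuming the indexing run wrote its results into the distribution directory created during <code>init</code> (check that directory for the exact file names), the deployment could look like this; the bundle itself is added and started via the "Bundles" tab as described above:
+
+    # copy the generated Solr index archive into the Stanbol datafiles directory
+    cp ~/myindex/indexing/dist/*.zip {root}/sling/datafiles/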
+
+
+### Configuring the enhancement engines
+
+Before you can make use of the custom vocabulary, you need to decide which kind of enhancements you want to support. If your enhancements are named entities in the stricter sense (Persons, Locations, Organizations), then you may use the standard NER engine together with the EntityLinkingEngine to configure the destination of your links.
+
+If you want to match all kinds of named entities and concepts from your custom vocabulary, you should work with the TaxonomyLinkingEngine to both find occurrences and link them to custom entities. In this case you only get results if there is a match, while in the case above you also get entities for which no exact links are found. This approach has its advantages when you need a high recall on your custom entities.
+
+
+In the following, the configuration options are described briefly.
+
+**Use the TaxonomyLinkingEngine only**
+
+(1) To make sure that the enhancement process uses the TaxonomyLinkingEngine only, deactivate the "standard NLP" enhancement engines, especially the NamedEntityExtractionEnhancementEngine (NER) and the EntityLinkingEngine, before working with the TaxonomyLinkingEngine.
+
+(2) Open the configuration console at http://localhost:8080/system/console/configMgr and navigate to the TaxonomyLinkingEngine. Its main options are configurable via the UI:
+
+- Referenced Site: {put the id/name of your index} (required)
+- Label Field: {the property to search for}
+- Use Simple Tokenizer: {deactivate to use language specific tokenizers}
+- Min Token Length: {set minimal token length}
+- Use Chunker: {disable/enable language specific chunkers}
+- Suggestions: {maximum number of suggestions}
+- Number of Required Tokens: {minimal required tokens}
+
+*For further details on the engine and its configuration, please consult the corresponding readme file at (TODO: create the readme) <code>{root}/stanbol/enhancer/engines/taxonomylinking/</code>.*
+       
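+
+Once the engine is configured, you can quickly test it by posting plain text to the enhancer, e.g. with curl (a sketch assuming the default endpoint of the enhancement engines at <code>http://localhost:8080/engines</code> and a text that mentions a concept from your vocabulary):
+
+    curl -X POST -H "Accept: application/rdf+xml" \
+         -H "Content-Type: text/plain" \
+         --data "Some text mentioning a concept from your custom vocabulary" \
+         http://localhost:8080/engines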
+
+**Use several instances of the TaxonomyLinkingEngine**
+
+Working with several instances of the TaxonomyLinkingEngine at the same time can be useful in cases where you have two or more distinct custom vocabularies/indexes, and/or if you want to combine your specific domain vocabulary with general purpose datasets such as DBpedia.
+
+
+**Use the TaxonomyLinkingEngine together with the NER engine and the 
EntityLinkingEngine**
+
+If your text corpus contains, and you are interested in, both generic named entities and concepts from your custom thesaurus, you may use the TaxonomyLinkingEngine together with the NER engine and the EntityLinkingEngine.
+
+
+
+### Demos and Examples
+
+- The full demo installation of Stanbol is configured to also work with an environmental thesaurus. If you test it with unstructured text from that domain, you should get enhancements with additional results for specific "concepts".
+- One example using metadata from the Austrian National Library is described here (TODO: link).
+
+(TODO) - Examples
+
+
+## Create a custom index for DBpedia
+
+(TODO) dbpedia indexing (<-- olivier)
+
+
+## Working with ontologies in EntityHub
+
+(TODO)
+
+### Demos and Examples
+
+(TODO)
+

Modified: incubator/stanbol/site/trunk/content/stanbol/docs/trunk/index.mdtext
URL: http://svn.apache.org/viewvc/incubator/stanbol/site/trunk/content/stanbol/docs/trunk/index.mdtext?rev=1170676&r1=1170675&r2=1170676&view=diff
==============================================================================
--- incubator/stanbol/site/trunk/content/stanbol/docs/trunk/index.mdtext (original)
+++ incubator/stanbol/site/trunk/content/stanbol/docs/trunk/index.mdtext Wed Sep 14 15:16:02 2011
@@ -53,7 +53,7 @@ The web interface of your Apache Stanbol
 
 Analyze textual content, enhance it with named entities (person, place, organization), suggest links to open data sources.
 
-* Working with "local" Entities
+* [Working with "local" Entities](customvocabulary.html)
 
  Use locally defined entities (e.g. thesaurus concepts) from an organization's 
context.  
 

