Repository: incubator-rya
Updated Branches:
  refs/heads/develop 2a6f73088 -> ce4a10ff5


http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/_index.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/_index.md 
b/extras/rya.manual/src/site/markdown/_index.md
new file mode 100644
index 0000000..184b94f
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/_index.md
@@ -0,0 +1,23 @@
+
+# Rya
+- [Overview](overview.md)
+- [Quick Start](quickstart.md)
+- [Load Data](loaddata.md)
+- [Query Data](querydata.md)
+- [Evaluation Table](eval.md)
+- [Pre-computed Joins](loadPrecomputedJoin.md)
+- [Inferencing](infer.md)
+
+# Samples
+- [Typical First Steps](sm-firststeps.md)
+- [Simple Add/Query/Remove Statements](sm-simpleaqr.md)
+- [Sparql query](sm-sparqlquery.md)
+- [Adding Authentication](sm-addauth.md)
+- [Inferencing](sm-infer.md)
+- [Named Graph](sm-namedgraph.md)
+- [Update data](sm-updatedata.md)
+- [Alx](alx.md)
+
+# Development
+- [Building From Source](build-source.md)
+- [LTS Maven Settings XML](maven-settings.md)

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/alx.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/alx.md 
b/extras/rya.manual/src/site/markdown/alx.md
new file mode 100644
index 0000000..78a4c8e
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/alx.md
@@ -0,0 +1,61 @@
+# Alx Rya Integration
+
+Alx is a modular framework for developing applications. Rya has mechanisms to integrate directly into Alx to provide other modules access to queries.
+
+Currently, the Alx Rya extension only allows interacting with an Accumulo store.
+
+## Prerequisites
+
+- Alx 1.0.5+ (we will refer to its location as the ALX_HOME directory from now on)
+- alx.rya features xml (can be found in Maven at `mvn:mvm.rya/alx.rya/<version>/xml/features`)
+
+## Steps
+
+1. Start up Alx
+2. features:addurl the alx.rya features xml (see the example below)
+3. features:install alx.rya
+4. (optional) features:install alx.rya.console
+
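+For example, using the features URL format above with a concrete version (3.0.4-SNAPSHOT appears elsewhere in this manual; substitute your own):
+
+```
+features:addurl mvn:mvm.rya/alx.rya/3.0.4-SNAPSHOT/xml/features
+features:install alx.rya
+```
+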
+That's it. To make sure it worked, run `ls <alx.rya bundle id>` and check that something like this pops up:
+
+```
+mvm.rya.alx.rya (99) provides:
+------------------------------
+Bundle-SymbolicName = mvm.rya.alx.rya
+Bundle-Version = 3.0.4.SNAPSHOT
+objectClass = org.osgi.service.cm.ManagedService
+service.id = 226
+service.pid = mvm.rya.alx
+----
+...
+```
+
+## Using
+
+The bundle registers a Sail Repository, so you can interact with it directly as in the other code examples. Here is a quick Groovy example of the usage:
+
+``` GROOVY
+import org.springframework.osgi.extensions.annotation.*;
+import org.openrdf.repository.*;
+import org.openrdf.model.ValueFactory;
+import static mvm.rya.api.RdfCloudTripleStoreConstants.*;
+
+class TstRepo {
+
+    @ServiceReference
+    public void setRepo(Repository repo) {
+        println repo
+        RepositoryConnection conn = repo.getConnection();
+        ValueFactory vf = VALUE_FACTORY;
+        //retrieve all statements with the given subject
+        def statements = conn.getStatements(vf.createURI("http://www.Department0.University0.edu"), null, null, true);
+        while(statements.hasNext()) {
+            System.out.println(statements.next());
+        }
+        statements.close();
+        conn.close();
+    }
+
+}
+```
+
+The bundle also registers a RyaDAO, so you can interact with the RyaDAO interface directly.
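+
+A minimal sketch of looking the DAO up, assuming the same Spring-DM annotation injection used in the Groovy example above (the `RyaDAO` import path follows the `mvm.rya.api` package conventions and is an assumption; adjust to your version):
+
+``` JAVA
+import org.springframework.osgi.extensions.annotation.ServiceReference;
+
+import mvm.rya.api.persist.RyaDAO; //assumed package for the RyaDAO interface
+
+public class TstDao {
+
+    @ServiceReference
+    public void setRyaDAO(RyaDAO dao) {
+        //the injected service is the same DAO that backs the Sail Repository
+        System.out.println(dao);
+    }
+}
+```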

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/build-source.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/build-source.md 
b/extras/rya.manual/src/site/markdown/build-source.md
new file mode 100644
index 0000000..e811622
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/build-source.md
@@ -0,0 +1,15 @@
+# Building from Source
+
+## Prerequisites
+
+* Rya code
+* Maven 2.2+
+
+## Building
+
+Using Git, pull down the latest code from the repository (see the Quick Start section for the URL).
+
+Run the command `mvn clean install` to build the code.
+
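+For example, using the Git URL listed in the Quick Start section:
+
+```
+git clone git://git.apache.org/incubator-rya.git
+cd incubator-rya
+mvn clean install
+```
+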
+If all goes well, here are the artifacts that you will be interested in:
+* Rya-WAR : web/web-rya/target/web.rya.war

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/eval.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/eval.md 
b/extras/rya.manual/src/site/markdown/eval.md
new file mode 100644
index 0000000..8a40389
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/eval.md
@@ -0,0 +1,58 @@
+# Prospects Table
+
+The Prospects Table provides statistics on the number of subject/predicate/object data found in the triple store. It is currently a Map Reduce job that will run against the Rya store and save all the statistics in the prospects table.
+
+## Build
+
+[Build the mmrts.git repo](build-source.md)
+
+## Run
+
+Deploy the `extras/rya.prospector/target/rya.prospector-<version>-shade.jar` file to the hadoop cluster.
+
+The prospector also requires a configuration file that defines where Accumulo is, which Rya table to read from (it has to be the SPO table), and which table to output to. (Note: make sure you follow the same schema as the Rya tables; the prospects table name is `tableprefix_prospects`.)
+
+A sample configuration file might look like the following:
+
+``` XML
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+    <property>
+        <name>prospector.intable</name>
+        <value>triplestore_spo</value>
+    </property>
+    <property>
+        <name>prospector.outtable</name>
+        <value>triplestore_prospects</value>
+    </property>
+    <property>
+        <name>prospector.auths</name>
+        <value>U,FOUO</value>
+    </property>
+    <property>
+        <name>instance</name>
+        <value>accumulo</value>
+    </property>
+    <property>
+        <name>zookeepers</name>
+        <value>localhost:2181</value>
+    </property>
+    <property>
+        <name>username</name>
+        <value>root</value>
+    </property>
+    <property>
+        <name>password</name>
+        <value>secret</value>
+    </property>
+</configuration>
+```
+
+Run the command, filling in the correct information.
+
+```
+hadoop jar rya.prospector-3.0.4-SNAPSHOT-shade.jar mvm.rya.prospector.mr.Prospector /tmp/prospectorConf.xml
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/index.md 
b/extras/rya.manual/src/site/markdown/index.md
new file mode 100644
index 0000000..aa49e3b
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/index.md
@@ -0,0 +1,24 @@
+# Rya
+
+This project contains documentation about Rya, a scalable RDF triple store on top of Accumulo.
+
+- [Overview](overview.md)
+- [Quick Start](quickstart.md)
+- [Load Data](loaddata.md)
+- [Query Data](querydata.md)
+- [Evaluation Table](eval.md)
+- [Pre-computed Joins](loadPrecomputedJoin.md)
+- [Inferencing](infer.md)
+
+# Samples
+- [Typical First Steps](sm-firststeps.md)
+- [Simple Add/Query/Remove Statements](sm-simpleaqr.md)
+- [Sparql query](sm-sparqlquery.md)
+- [Adding Authentication](sm-addauth.md)
+- [Inferencing](sm-infer.md)
+- [Named Graph](sm-namedgraph.md)
+- [Update data](sm-updatedata.md)
+- [Alx](alx.md)
+
+# Development
+- [Building From Source](build-source.md)

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/infer.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/infer.md 
b/extras/rya.manual/src/site/markdown/infer.md
new file mode 100644
index 0000000..ee769c5
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/infer.md
@@ -0,0 +1,14 @@
+# Inferencing
+
+The current inferencing set supported includes:
+
+* rdfs:subClassOf
+* rdfs:subPropertyOf
+* owl:equivalentProperty
+* owl:inverseOf
+* owl:SymmetricProperty
+* owl:TransitiveProperty (implemented, but possibly not fully; still in testing)
+
+Nothing special has to be done beyond making sure that the RdfCloudTripleStore object has the InferencingEngine object set on it and properly configured. This is usually done by default. See the [Query Data Section](querydata.md) for a simple example.
+
+Also, the inferencing engine is currently set to pull down the latest model every 5 minutes (this is configurable). So if you load a new model, a previous RepositoryConnection may not pick up these changes in the inferencing engine yet. Getting the InferencingEngine object from the RdfCloudTripleStore and running the `refreshGraph` method can refresh the inferred graph immediately, as sketched below.
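+
+A minimal sketch of forcing a refresh, assuming the store setup from the [Query Data Section](querydata.md) (the `getInferenceEngine` getter is an assumption; `refreshGraph` is the method named above):
+
+``` JAVA
+//force an immediate rebuild of the inferred graph after loading a new model
+InferenceEngine inferenceEngine = store.getInferenceEngine(); //assumed getter on RdfCloudTripleStore
+inferenceEngine.refreshGraph();
+```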
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/loadPrecomputedJoin.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/loadPrecomputedJoin.md 
b/extras/rya.manual/src/site/markdown/loadPrecomputedJoin.md
new file mode 100644
index 0000000..472b409
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/loadPrecomputedJoin.md
@@ -0,0 +1,28 @@
+# Load Pre-computed Join
+
+A tool has been created to load a pre-computed join. This tool will generate an index to support a pre-computed join on a user-provided SPARQL query, and then register that query within Rya.
+
+
+## Registering a pre-computed join
+
+Generating a pre-computed join is done using Pig to execute a series of Map Reduce jobs. The index (pre-computed join) is associated with a user-defined SPARQL query.
+
+To execute the indexing tool, compile and run `mvm.rya.accumulo.pig.IndexWritingTool` with the following seven input arguments: `[hdfsSaveLocation] [sparqlFile] [instance] [cbzk] [user] [password] [rdfTablePrefix]`
+
+
+Options:
+
+* hdfsSaveLocation: a working directory on HDFS for storing interim results
+* sparqlFile: the query to generate a pre-computed join for
+* instance: the Accumulo instance name
+* cbzk: the Accumulo zookeeper name
+* user: the Accumulo username
+* password: the Accumulo password for the supplied user
+* rdfTablePrefix: the tables (spo, po, osp) are prefixed with this qualifier. The tables become: (rdf.tablePrefix)spo, (rdf.tablePrefix)po, (rdf.tablePrefix)osp (see the sample invocation below)
+
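+A sample invocation might look like the following (launching via `hadoop jar` and the shaded jar name are assumptions; the seven arguments follow the order above):
+
+```
+hadoop jar rya.accumulo.pig-<version>-shade.jar mvm.rya.accumulo.pig.IndexWritingTool /tmp/precomputedJoin /tmp/query.sparql accumulo localhost:2181 root secret triplestore_
+```
+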
+
+## Using a Pre-computed Join
+
+An example of using a pre-computed join can be referenced in `mvm.rya.indexing.external.ExternalSailExample`.

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/loaddata.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/loaddata.md 
b/extras/rya.manual/src/site/markdown/loaddata.md
new file mode 100644
index 0000000..3a66d6a
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/loaddata.md
@@ -0,0 +1,121 @@
+# Load Data
+
+There are a few mechanisms to load data.
+
+## Web REST endpoint
+
+The War sets up a Web REST endpoint at `http://server/web.rya/loadrdf` that allows POSTed data to be loaded into the Rdf Store. This short tutorial will use Java code to post data.
+
+First, you will need data to load and will need to figure out what format that data is in.
+
+For this sample, we will use the following N-Triples:
+
+```
+<http://mynamespace/ProductType1> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://mynamespace/ProductType> .
+<http://mynamespace/ProductType1> <http://www.w3.org/2000/01/rdf-schema#label> "Thing" .
+<http://mynamespace/ProductType1> <http://purl.org/dc/elements/1.1/publisher> <http://mynamespace/Publisher1> .
+```
+
+Save this file somewhere; we will refer to its location as `$RDF_DATA`.
+
+Second, use the following Java code to load data to the REST endpoint:
+
+``` JAVA
+import java.io.BufferedReader;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.net.URL;
+import java.net.URLConnection;
+
+public class LoadDataServletRun {
+
+    public static void main(String[] args) {
+        try {
+            final InputStream resourceAsStream = Thread.currentThread().getContextClassLoader()
+                    .getResourceAsStream("$RDF_DATA");
+            URL url = new URL("http://server/web.rya/loadrdf" +
+                    "?format=N-Triples" +
+                    "");
+            URLConnection urlConnection = url.openConnection();
+            urlConnection.setRequestProperty("Content-Type", "text/plain");
+            urlConnection.setDoOutput(true);
+
+            final OutputStream os = urlConnection.getOutputStream();
+
+            int read;
+            while((read = resourceAsStream.read()) >= 0) {
+                os.write(read);
+            }
+            resourceAsStream.close();
+            os.flush();
+
+            BufferedReader rd = new BufferedReader(new InputStreamReader(
+                    urlConnection.getInputStream()));
+            String line;
+            while ((line = rd.readLine()) != null) {
+                System.out.println(line);
+            }
+            rd.close();
+            os.close();
+        } catch (Exception e) {
+            e.printStackTrace();
+        }
+    }
+}
+```
+
+Compile and run the code above, changing the reference to $RDF_DATA and the URL where your Rdf War is running.
+
+The default "format" is RDF/XML, but these formats are supported : RDFXML, 
NTRIPLES, TURTLE, N3, TRIX, TRIG.
+
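+If you prefer the command line, a roughly equivalent POST with curl might look like this (the file name is illustrative; the endpoint and `format` parameter are as described above):
+
+```
+curl -X POST -H "Content-Type: text/plain" --data-binary @ntriples.nt "http://server/web.rya/loadrdf?format=N-Triples"
+```
+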
+## Bulk Loading data
+
+Bulk loading data is done through Map Reduce jobs.
+
+### Bulk Load RDF data
+
+This Map Reduce job will read a full file into memory and parse it into statements. The statements are saved into the store. Here is an example for storing in Accumulo:
+
+```
+hadoop jar target/accumulo.rya-3.0.4-SNAPSHOT-shaded.jar mvm.rya.accumulo.mr.fileinput.BulkNtripsInputTool -Dac.zk=localhost:2181 -Dac.instance=accumulo -Dac.username=root -Dac.pwd=secret -Drdf.tablePrefix=triplestore_ -Dio.sort.mb=64 /tmp/temp.ntrips
+```
+
+Options:
+
+- rdf.tablePrefix : The tables (spo, po, osp) are prefixed with this qualifier. The tables become: (rdf.tablePrefix)spo, (rdf.tablePrefix)po, (rdf.tablePrefix)osp
+- ac.* : Accumulo connection parameters
+- rdf.format : See RDFFormat from OpenRDF; samples include Trig, N-Triples, RDF/XML
+- io.sort.mb : The higher the value, the faster the job goes. Just remember that you will need at least this much RAM per mapper
+
+The argument is the directory/file to load. This file needs to be loaded into HDFS before running.
+
+## Direct OpenRDF API
+
+Here is some sample code to load N-Triples data directly through the OpenRDF API. You will need at least `accumulo.rya-<version>`, `rya.api`, and `rya.sail.impl` on the classpath, plus transitive dependencies. I find that Maven is the easiest way to get a project dependency tree set up.
+
+``` JAVA
+final RdfCloudTripleStore store = new RdfCloudTripleStore();
+AccumuloRdfConfiguration conf = new AccumuloRdfConfiguration();
+AccumuloRyaDAO dao = new AccumuloRyaDAO();
+Connector connector = new ZooKeeperInstance("instance", "zoo1,zoo2,zoo3").getConnector("user", "password");
+dao.setConnector(connector);
+conf.setTablePrefix("rya_");
+dao.setConf(conf);
+store.setRdfDao(dao);
+
+Repository myRepository = new RyaSailRepository(store);
+myRepository.initialize();
+RepositoryConnection conn = myRepository.getConnection();
+
+//load data from file
+final File file = new File("ntriples.ntrips");
+conn.add(new FileInputStream(file), file.getName(),
+        RDFFormat.NTRIPLES, new Resource[]{});
+
+conn.commit();
+
+conn.close();
+myRepository.shutDown();
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/overview.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/overview.md 
b/extras/rya.manual/src/site/markdown/overview.md
new file mode 100644
index 0000000..546530f
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/overview.md
@@ -0,0 +1,5 @@
+# Overview
+
+RYA is a scalable RDF store that is built on top of a columnar index store (such as Accumulo). It is implemented as an extension to OpenRdf to provide easy query mechanisms (SPARQL, SERQL, etc.) and RDF data storage (RDF/XML, N-Triples, etc.).
+
+RYA stands for RDF y (and) Accumulo.

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/querydata.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/querydata.md 
b/extras/rya.manual/src/site/markdown/querydata.md
new file mode 100644
index 0000000..70f3045
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/querydata.md
@@ -0,0 +1,116 @@
+# Query Data
+
+There are a few mechanisms to query data.
+
+## Web JSP endpoint
+
+Open `http://server/web.rya/sparqlQuery.jsp` in a browser. This simple form can run SPARQL queries.
+
+## Web REST endpoint
+
+The War sets up a Web REST endpoint at `http://server/web.rya/queryrdf` that allows GET requests with queries.
+
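+For a quick check from the command line, something like the following curl call should work (the query must be URL-encoded; `--data-urlencode` with `-G` handles that):
+
+```
+curl -G "http://server/web.rya/queryrdf" --data-urlencode 'query=select * where {<http://mynamespace/ProductType1> ?p ?o.}'
+```
+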
+For this sample, we will assume you already loaded data from the [Load Data](loaddata.md) tutorial.
+
+Use the following Java code to query the REST endpoint:
+
+``` JAVA
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.net.URL;
+import java.net.URLConnection;
+import java.net.URLEncoder;
+
+public class QueryDataServletRun {
+
+    public static void main(String[] args) {
+        try {
+            String query = "select * where {\n" +
+                                "<http://mynamespace/ProductType1> ?p ?o.\n" +
+                                "}";
+
+            String queryenc = URLEncoder.encode(query, "UTF-8");
+
+            URL url = new URL("http://server/web.rya/queryrdf?query=" + queryenc);
+            URLConnection urlConnection = url.openConnection();
+            urlConnection.setDoOutput(true);
+
+            BufferedReader rd = new BufferedReader(new InputStreamReader(
+                    urlConnection.getInputStream()));
+            String line;
+            while ((line = rd.readLine()) != null) {
+                System.out.println(line);
+            }
+            rd.close();
+        } catch (Exception e) {
+            e.printStackTrace();
+        }
+    }
+}
+```
+
+Compile and run the code above, changing the URL to where your Rdf War is running.
+
+## Direct Code
+
+Here is a code snippet for directly running against Accumulo with the code. You will need at least accumulo.rya.jar, rya.api, and rya.sail.impl on the classpath, plus transitive dependencies. I find that Maven is the easiest way to get a project dependency tree set up.
+
+``` JAVA
+Connector connector = new ZooKeeperInstance("instance", "zoo1,zoo2,zoo3").getConnector("user", "password");
+
+final RdfCloudTripleStore store = new RdfCloudTripleStore();
+AccumuloRyaDAO crdfdao = new AccumuloRyaDAO();
+crdfdao.setConnector(connector);
+
+AccumuloRdfConfiguration conf = new AccumuloRdfConfiguration();
+conf.setTablePrefix("rts_");
+conf.setDisplayQueryPlan(true);
+crdfdao.setConf(conf);
+store.setRdfDao(crdfdao);
+
+ProspectorServiceEvalStatsDAO evalDao = new ProspectorServiceEvalStatsDAO(connector, conf);
+evalDao.init();
+store.setRdfEvalStatsDAO(evalDao);
+
+InferenceEngine inferenceEngine = new InferenceEngine();
+inferenceEngine.setRdfDao(crdfdao);
+inferenceEngine.setConf(conf);
+store.setInferenceEngine(inferenceEngine);
+
+Repository myRepository = new RyaSailRepository(store);
+myRepository.initialize();
+
+String query = "select * where {\n" +
+        "<http://mynamespace/ProductType1> ?p ?o.\n" +
+        "}";
+RepositoryConnection conn = myRepository.getConnection();
+System.out.println(query);
+TupleQuery tupleQuery = conn.prepareTupleQuery(
+        QueryLanguage.SPARQL, query);
+
+tupleQuery.evaluate(new TupleQueryResultHandler() {
+
+    int count = 0;
+
+    @Override
+    public void startQueryResult(List<String> strings) throws TupleQueryResultHandlerException {
+    }
+
+    @Override
+    public void endQueryResult() throws TupleQueryResultHandlerException {
+    }
+
+    @Override
+    public void handleSolution(BindingSet bindingSet) throws TupleQueryResultHandlerException {
+        System.out.println(bindingSet);
+    }
+});
+
+conn.close();
+myRepository.shutDown();
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/quickstart.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/quickstart.md 
b/extras/rya.manual/src/site/markdown/quickstart.md
new file mode 100644
index 0000000..52bc111
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/quickstart.md
@@ -0,0 +1,41 @@
+# Quick Start
+
+This tutorial will outline the steps needed to get quickly started with the Rya store using the web-based endpoint.
+
+## Prerequisites
+
+* Columnar Store (Accumulo)
+* Rya code (Git: git://git.apache.org/incubator-rya.git)
+* Maven 3.0 +
+
+## Building from Source
+
+Using Git, pull down the latest code from the url above.
+
+Run the command to build the code `mvn clean install`
+
+If all goes well, the build should be successful and a war should be produced in `web/web.rya/target/web.rya.war`.
+
+## Deployment Using Tomcat
+
+Unwar the above war into the webapps directory.
+
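+One way to do this (assuming `$TOMCAT_HOME` is your Tomcat install directory):
+
+```
+mkdir $TOMCAT_HOME/webapps/web.rya
+cd $TOMCAT_HOME/webapps/web.rya
+jar -xf /path/to/web.rya.war
+```
+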
+To point the web.rya war to the appropriate Accumulo instance, make a properties file `environment.properties` and put it in the classpath. Here is an example:
+
+```
+# Accumulo instance name
+instance.name=accumulo
+# Accumulo Zookeepers
+instance.zk=localhost:2181
+# Accumulo username
+instance.username=root
+# Accumulo password
+instance.password=secret
+# Rya Table Prefix
+rya.tableprefix=triplestore_
+# Display the query plan
+rya.displayqueryplan=true
+```
+
+Start the Tomcat server. `./bin/startup.sh`
+
+## Usage
+
+First, we need to load data. See the [Load Data Section](loaddata.md).
+
+Second, we need to query that data. See the [Query Data Section](querydata.md).
+

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/sm-addauth.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/sm-addauth.md 
b/extras/rya.manual/src/site/markdown/sm-addauth.md
new file mode 100644
index 0000000..aadef07
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/sm-addauth.md
@@ -0,0 +1,98 @@
+# Add Authentication
+
+This tutorial will give a few examples on how to load and query data with authentication.
+
+This is currently only available for Accumulo, because it provides the security filters necessary to do row-level authentication and visibility.
+
+## Load Data with Visibilities
+
+During the load process, there are a few ways to set the Column Visibility you want set on each of the corresponding RDF rows.
+
+### Global Visibility
+
+You can set the Column Visibility globally on the RdfCloudTripleStore, and it will use that particular value for every row saved.
+
+To do this, once you create and set up the RdfCloudTripleStore, just set the property on the store configuration:
+
+``` JAVA
+//setup
+final RdfCloudTripleStore store = new RdfCloudTripleStore();
+AccumuloRyaDAO crdfdao = new AccumuloRyaDAO();
+crdfdao.setConnector(connector);
+
+AccumuloRdfConfiguration conf = new AccumuloRdfConfiguration();
+conf.setTablePrefix("rts_");
+conf.setDisplayQueryPlan(true);
+
+//set global column Visibility
+conf.setCv("AUTH1|AUTH2");
+
+crdfdao.setConf(conf);
+store.setRdfDao(crdfdao);
+```
+
+The format is the same as Accumulo's Column Visibility expression format.
+
+### Per triple or document based Visibility
+
+TODO: Not available as of yet
+
+## Query Data with Authentication
+
+Attaching an authentication to the query process is very simple. It requires just adding the property `RdfCloudTripleStoreConfiguration.CONF_QUERY_AUTH` to the query `BindingSet`. Example:
+
+``` JAVA
+//setup
+Connector connector = new ZooKeeperInstance("instance", "zoo1,zoo2,zoo3").getConnector("user", "password");
+final RdfCloudTripleStore store = new RdfCloudTripleStore();
+AccumuloRyaDAO crdfdao = new AccumuloRyaDAO();
+crdfdao.setConnector(connector);
+
+AccumuloRdfConfiguration conf = new AccumuloRdfConfiguration();
+conf.setTablePrefix("rts_");
+conf.setDisplayQueryPlan(true);
+crdfdao.setConf(conf);
+//set global column Visibility
+conf.setCv("1|2");
+store.setRdfDao(crdfdao);
+
+InferenceEngine inferenceEngine = new InferenceEngine();
+inferenceEngine.setRdfDao(crdfdao);
+inferenceEngine.setConf(conf);
+store.setInferenceEngine(inferenceEngine);
+
+Repository myRepository = new RyaSailRepository(store);
+myRepository.initialize();
+RepositoryConnection conn = myRepository.getConnection();
+
+//define and add statement
+ValueFactory vf = myRepository.getValueFactory(); //needed for createURI below
+String litdupsNS = "urn:test:litdups#";
+URI cpu = vf.createURI(litdupsNS, "cpu");
+URI loadPerc = vf.createURI(litdupsNS, "loadPerc");
+URI uri1 = vf.createURI(litdupsNS, "uri1");
+conn.add(cpu, loadPerc, uri1);
+conn.commit();
+
+//query with auth
+String query = "select * where {" +
+                "<" + cpu.toString() + "> ?p ?o1." +
+                "}";
+TupleQuery tupleQuery = conn.prepareTupleQuery(QueryLanguage.SPARQL, query);
+tupleQuery.setBinding(RdfCloudTripleStoreConfiguration.CONF_QUERY_AUTH, vf.createLiteral("2"));
+TupleQueryResult result = tupleQuery.evaluate();
+while(result.hasNext()) {
+    System.out.println(result.next());
+}
+result.close();
+
+//close
+conn.close();
+myRepository.shutDown();
+```
+
+Or you can set a global auth using the configuration:
+
+``` JAVA
+conf.setAuth("2");
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/sm-firststeps.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/sm-firststeps.md 
b/extras/rya.manual/src/site/markdown/sm-firststeps.md
new file mode 100644
index 0000000..c08c035
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/sm-firststeps.md
@@ -0,0 +1,59 @@
+# Typical First Steps
+
+In this tutorial, I will give you a quick overview of some of the first steps I perform to get data loaded and ready for query.
+
+## Prerequisites
+
+We are assuming Accumulo 1.5+ usage here.
+
+* Rya Source Code (`web.rya.war`)
+* Accumulo on top of Hadoop 0.20+
+* RDF Data (in N-Triples format; this format is the easiest to bulk load)
+
+## Building Source
+
+Skip this section if you already have the Map Reduce artifact and the WAR.
+
+See the [Build From Source Section](build-source.md) to get the appropriate artifacts built.
+
+## Load Data
+
+I find that the best way to load the data is through the Bulk Load Map Reduce job.
+
+* Save the RDF Data above onto HDFS. From now on we will refer to this location as `<RDF_HDFS_LOCATION>`
+* Move the `accumulo.rya-<version>-job.jar` onto the hadoop cluster
+* Bulk load the data. Here is a sample command line:
+
+```
+hadoop jar ../accumulo.rya-2.0.0-SNAPSHOT-job.jar BulkNtripsInputTool -Drdf.tablePrefix=lubm_ -Dcb.username=user -Dcb.pwd=cbpwd -Dcb.instance=instance -Dcb.zk=zookeeperLocation -Drdf.format=N-Triples <RDF_HDFS_LOCATION>
+```
+
+Once the data is loaded, it is actually a good practice to compact your tables. You can do this by opening the accumulo `shell` and running the `compact` command on the generated tables. Remember, the generated tables will be prefixed by the `rdf.tablePrefix` property you assigned above. The default tablePrefix is `rts`.
+
+Here is a sample accumulo shell command:
+
+```
+compact -p lubm_(.*)
+```
+
+See the [Load Data Section](loaddata.md) for more options on loading RDF data.
+
+## Run the Statistics Optimizer
+
+For the best query performance, it is recommended to run the Statistics Optimizer to create the Evaluation Statistics table. This job will read through your data and gather statistics on the distribution of the dataset. This table is then queried before query execution to reorder queries based on the data distribution.
+
+See the [Evaluation Statistics Table Section](eval.md) on how to do this.
+
+## Query data
+
+I find the easiest way to query is just to use the WAR. Load the WAR into your favorite web application container and go to the sparqlQuery.jsp page. Example:
+
+```
+http://localhost:8080/web.rya/sparqlQuery.jsp
+```
+
+This page provides a very simple text box for running SPARQL queries against the store and getting data back.
+
+Remember to update the connection information in the WAR: `WEB-INF/spring/spring-accumulo.xml`
+
+See the [Query Data Section](querydata.md) for more information.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/sm-infer.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/sm-infer.md 
b/extras/rya.manual/src/site/markdown/sm-infer.md
new file mode 100644
index 0000000..712af78
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/sm-infer.md
@@ -0,0 +1,332 @@
+# Inferencing
+
+Rya currently provides simple inferencing. The supported list of inferred relationships includes:
+
+- rdfs:subClassOf
+- rdfs:subPropertyOf
+- owl:equivalentProperty
+- owl:inverseOf
+- owl:SymmetricProperty
+- owl:TransitiveProperty (this is currently in beta and will not work for every case)
+- owl:sameAs
+
+## Setup
+
+The Inferencing Engine is a scheduled job that runs every 5 minutes by default (this is configurable) to query the relationships in the store and develop the inferred graphs necessary to answer inferencing questions.
+
+This also means that if you load a model into the store, it could take up to 5 minutes for the inferred relationships to be available.
+
+As usual, you will need to set up your `RdfCloudTripleStore` with the correct DAO; notice we add an `InferenceEngine` to the store as well. If this is not added, then no inferencing will be done on the queries:
+
+``` JAVA
+//setup
+Connector connector = new ZooKeeperInstance("instance", "zoo1,zoo2,zoo3").getConnector("user", "password");
+final RdfCloudTripleStore store = new RdfCloudTripleStore();
+AccumuloRyaDAO crdfdao = new AccumuloRyaDAO();
+crdfdao.setConnector(connector);
+
+AccumuloRdfConfiguration conf = new AccumuloRdfConfiguration();
+conf.setTablePrefix("rts_");
+conf.setDisplayQueryPlan(true);
+crdfdao.setConf(conf);
+store.setRdfDao(crdfdao);
+
+ProspectorServiceEvalStatsDAO evalDao = new ProspectorServiceEvalStatsDAO(connector, conf);
+evalDao.init();
+store.setRdfEvalStatsDAO(evalDao);
+
+InferenceEngine inferenceEngine = new InferenceEngine();
+inferenceEngine.setRdfDao(crdfdao);
+inferenceEngine.setConf(conf);
+store.setInferenceEngine(inferenceEngine);
+
+Repository myRepository = new RyaSailRepository(store);
+myRepository.initialize();
+RepositoryConnection conn = myRepository.getConnection();
+
+//query code goes here
+
+//close
+conn.close();
+myRepository.shutDown();
+```
+
+## Samples
+
+We will go through some quick samples on loading inferred relationships, seeing and diagnosing the query plan, and checking the data.
+
+### Rdfs:SubClassOf
+
+First the code, which will load the following subClassOf relationship: `UndergraduateStudent subClassOf Student subClassOf Person`. Then we will load into the tables three triples defining `UgradA rdf:type UndergraduateStudent, StudentB rdf:type Student, PersonC rdf:type Person`.
+
+``` JAVA
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "UndergraduateStudent"), RDFS.SUBCLASSOF, vf.createURI(litdupsNS, "Student")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "Student"), RDFS.SUBCLASSOF, vf.createURI(litdupsNS, "Person")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "UgradA"), RDF.TYPE, vf.createURI(litdupsNS, "UndergraduateStudent")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "StudentB"), RDF.TYPE, vf.createURI(litdupsNS, "Student")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "PersonC"), RDF.TYPE, vf.createURI(litdupsNS, "Person")));
+conn.commit();
+```
+
+Remember that once the model is committed, it may take up to 5 minutes for the inferred relationships to be ready, though you can override this property in the `InferenceEngine` (see the sketch below).
+
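+A minimal sketch of shortening the refresh interval (the setter name `setRefreshGraphSchedule` and its millisecond unit are assumptions; check the `InferenceEngine` API of your version):
+
+``` JAVA
+//rebuild the inferred graph every 30 seconds instead of the 5-minute default
+inferenceEngine.setRefreshGraphSchedule(30000); //assumed setter, in milliseconds
+```
+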
+We shall run the following query:
+
+```
+PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
+PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+PREFIX lit: <urn:test:litdups#>
+select * where {?s rdf:type lit:Person.}
+```
+
+And should get back the following results:
+
+```
+[s=urn:test:litdups#StudentB]
+[s=urn:test:litdups#PersonC]
+[s=urn:test:litdups#UgradA]
+```
+
+#### How it works
+
+Let us look at the query plan:
+
+```
+QueryRoot
+   Projection
+      ProjectionElemList
+         ProjectionElem "s"
+      Join
+         FixedStatementPattern
+            Var (name=79f261ee-e930-4af1-bc09-e637cc0affef)
+            Var (name=c-79f261ee-e930-4af1-bc09-e637cc0affef, value=http://www.w3.org/2000/01/rdf-schema#subClassOf)
+            Var (name=-const-2, value=urn:test:litdups#Person, anonymous)
+         DoNotExpandSP
+            Var (name=s)
+            Var (name=-const-1, value=http://www.w3.org/1999/02/22-rdf-syntax-ns#type, anonymous)
+            Var (name=79f261ee-e930-4af1-bc09-e637cc0affef)
+```
+
+Basically, we first find out (through the InferenceEngine) what triples have a subClassOf relationship with Person. The InferenceEngine will do the graph analysis to find that both Student and UndergraduateStudent are subclasses of Person. Then this information is joined with the statement pattern `(?s rdf:type ?inf)` where `?inf` is the result from the InferenceEngine.
+
+### Rdfs:SubPropertyOf
+
+SubPropertyOf defines that a property can be an instance of another property. For example, `gradDegreeFrom` is a subPropertyOf `degreeFrom`.
+
+Also, EquivalentProperty can be thought of as a specialized SubPropertyOf relationship: if `propA equivalentProperty propB`, then `propA subPropertyOf propB AND propB subPropertyOf propA`.
+
+Sample Code:
+
+``` JAVA
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "undergradDegreeFrom"), RDFS.SUBPROPERTYOF, vf.createURI(litdupsNS, "degreeFrom")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "gradDegreeFrom"), RDFS.SUBPROPERTYOF, vf.createURI(litdupsNS, "degreeFrom")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "degreeFrom"), RDFS.SUBPROPERTYOF, vf.createURI(litdupsNS, "memberOf")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "memberOf"), RDFS.SUBPROPERTYOF, vf.createURI(litdupsNS, "associatedWith")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "UgradA"), vf.createURI(litdupsNS, "undergradDegreeFrom"), vf.createURI(litdupsNS, "Harvard")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "GradB"), vf.createURI(litdupsNS, "gradDegreeFrom"), vf.createURI(litdupsNS, "Yale")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "ProfessorC"), vf.createURI(litdupsNS, "memberOf"), vf.createURI(litdupsNS, "Harvard")));
+conn.commit();
+```
+
+With query:
+
+```
+PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
+PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+PREFIX lit: <urn:test:litdups#>
+select * where {?s lit:memberOf lit:Harvard.}
+```
+
+Will return results:
+
+```
+[s=urn:test:litdups#UgradA]
+[s=urn:test:litdups#ProfessorC]
+```
+
+Since UgradA has undergradDegreeFrom Harvard and ProfessorC is memberOf Harvard.
+
+#### How it works
+
+This is very similar to the subClassOf relationship above. Basically, the InferenceEngine provides what properties are in subPropertyOf relationships with memberOf, and the second part of the Join checks to see if those properties are predicates with object "Harvard".
+
+Query Plan:
+
+```
+QueryRoot
+   Projection
+      ProjectionElemList
+         ProjectionElem "s"
+      Join
+         FixedStatementPattern
+            Var (name=0bad69f3-4769-4293-8318-e828b23dc52a)
+            Var (name=c-0bad69f3-4769-4293-8318-e828b23dc52a, value=http://www.w3.org/2000/01/rdf-schema#subPropertyOf)
+            Var (name=-const-1, value=urn:test:litdups#memberOf, anonymous)
+         DoNotExpandSP
+            Var (name=s)
+            Var (name=0bad69f3-4769-4293-8318-e828b23dc52a)
+            Var (name=-const-2, value=urn:test:litdups#Harvard, anonymous)
+```
+
+### InverseOf
+
+InverseOf defines a property that is the inverse relation of another property. For example, a student who has a `degreeFrom` a university also means that the university `hasAlumnus` that student.
+
+Code:
+
+``` JAVA
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "degreeFrom"), OWL.INVERSEOF, vf.createURI(litdupsNS, "hasAlumnus")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "UgradA"), vf.createURI(litdupsNS, "degreeFrom"), vf.createURI(litdupsNS, "Harvard")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "GradB"), vf.createURI(litdupsNS, "degreeFrom"), vf.createURI(litdupsNS, "Harvard")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "Harvard"), vf.createURI(litdupsNS, "hasAlumnus"), vf.createURI(litdupsNS, "AlumC")));
+conn.commit();
+```
+
+Query:
+
+```
+PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
+PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+PREFIX lit: <urn:test:litdups#>
+select * where {lit:Harvard lit:hasAlumnus ?s.}
+```
+
+Result:
+
+```
+[s=urn:test:litdups#AlumC]
+[s=urn:test:litdups#GradB]
+[s=urn:test:litdups#UgradA]
+```
+
+#### How it works
+
+The query planner will expand the statement pattern `Harvard hasAlumnus ?s` to a Union between `Harvard hasAlumnus ?s` and `?s degreeFrom Harvard`.
+
+As a caveat, it is important to note that in general Union queries do not have the best performance, so a property that has both an inverseOf and a subPropertyOf relationship could produce a query plan that takes a long time, depending on how the query planner orders the joins.
+
+Query Plan:
+
+```
+QueryRoot
+   Projection
+      ProjectionElemList
+         ProjectionElem "s"
+      InferUnion
+         StatementPattern
+            Var (name=-const-1, value=urn:test:litdups#Harvard, anonymous)
+            Var (name=-const-2, value=urn:test:litdups#hasAlumnus, anonymous)
+            Var (name=s)
+         StatementPattern
+            Var (name=s)
+            Var (name=-const-2, value=urn:test:litdups#degreeFrom)
+            Var (name=-const-1, value=urn:test:litdups#Harvard, anonymous)
+```
+
+### SymmetricProperty
+
+SymmetricProperty defines a relationship where, for example, if Bob is a friendOf Jeff, then Jeff is a friendOf Bob. (Hopefully)
+
+Code:
+
+``` JAVA
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "friendOf"), RDF.TYPE, OWL.SYMMETRICPROPERTY));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "Bob"), vf.createURI(litdupsNS, "friendOf"), vf.createURI(litdupsNS, "Jeff")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "James"), vf.createURI(litdupsNS, "friendOf"), vf.createURI(litdupsNS, "Jeff")));
+conn.commit();
+```
+
+Query:
+
+```
+PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
+PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+PREFIX lit: <urn:test:litdups#>
+select * where {?s lit:friendOf lit:Bob.}
+```
+
+Results:
+
+```
+[s=urn:test:litdups#Jeff]
+```
+
+#### How it works
+
+The query planner will recognize that `friendOf` is a SymmetricProperty and devise a Union to find the specified relationship and its inverse.
+
+Query Plan:
+
+```
+QueryRoot
+   Projection
+      ProjectionElemList
+         ProjectionElem "s"
+      InferUnion
+         StatementPattern
+            Var (name=s)
+            Var (name=-const-1, value=urn:test:litdups#friendOf, anonymous)
+            Var (name=-const-2, value=urn:test:litdups#Bob, anonymous)
+         StatementPattern
+            Var (name=-const-2, value=urn:test:litdups#Bob, anonymous)
+            Var (name=-const-1, value=urn:test:litdups#friendOf, anonymous)
+            Var (name=s)
+```
+
+### TransitiveProperty
+
+TransitiveProperty provides a transitive relationship between resources. For example, if Queens is subRegionOf NYC and NYC is subRegionOf NY, then Queens is transitively a subRegionOf NY.
+
+Code:
+
+``` JAVA
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "subRegionOf"), RDF.TYPE, OWL.TRANSITIVEPROPERTY));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "Queens"), vf.createURI(litdupsNS, "subRegionOf"), vf.createURI(litdupsNS, "NYC")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "NYC"), vf.createURI(litdupsNS, "subRegionOf"), vf.createURI(litdupsNS, "NY")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "NY"), vf.createURI(litdupsNS, "subRegionOf"), vf.createURI(litdupsNS, "US")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "US"), vf.createURI(litdupsNS, "subRegionOf"), vf.createURI(litdupsNS, "NorthAmerica")));
+conn.add(new StatementImpl(vf.createURI(litdupsNS, "NorthAmerica"), vf.createURI(litdupsNS, "subRegionOf"), vf.createURI(litdupsNS, "World")));
+conn.commit();
+```
+
+Query:
+
+```
+PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
+PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+PREFIX lit: <urn:test:litdups#>
+select * where {?s lit:subRegionOf lit:NorthAmerica.}
+```
+
+Results:
+
+```
+[s=urn:test:litdups#Queens]
+[s=urn:test:litdups#NYC]
+[s=urn:test:litdups#NY]
+[s=urn:test:litdups#US]
+```
+
+#### How it works
+
+The TransitiveProperty relationship works by running recursive queries until all the results are returned.
+
+It is important to note that certain TransitiveProperty relationships will not work:
+* Open-ended property: ?s subRegionOf ?o (at least one of the resources must be filled, or will be filled as the query gets answered)
+* Closed property: Queens subRegionOf NY (at least one of the resources must be empty)
+
+We are working on fixing these issues.
+
+Query Plan:
+
+```
+QueryRoot
+   Projection
+      ProjectionElemList
+         ProjectionElem "s"
+      TransitivePropertySP
+         Var (name=s)
+         Var (name=-const-1, value=urn:test:litdups#subRegionOf, anonymous)
+         Var (name=-const-2, value=urn:test:litdups#NorthAmerica, anonymous)
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/sm-namedgraph.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/sm-namedgraph.md 
b/extras/rya.manual/src/site/markdown/sm-namedgraph.md
new file mode 100644
index 0000000..f869adc
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/sm-namedgraph.md
@@ -0,0 +1,136 @@
+# Named Graphs
+
+Named graphs are supported in the Rdf Store in a few ways. OpenRdf supports sending `contexts` as each triple is saved.
+
+## Simple Named Graph Load and Query
+
+Here is a very simple example of using the API to insert data in named graphs and querying with SPARQL.
+
+First we will define a TriG document to load:
+
+```
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix swp: <http://www.w3.org/2004/03/trix/swp-1/> .
+@prefix dc: <http://purl.org/dc/elements/1.1/> .
+@prefix ex: <http://www.example.org/vocabulary#> .
+@prefix : <http://www.example.org/exampleDocument#> .
+:G1 { :Monica ex:name "Monica Murphy" .
+      :Monica ex:homepage <http://www.monicamurphy.org> .
+      :Monica ex:email <mailto:[email protected]> .
+      :Monica ex:hasSkill ex:Management }
+
+:G2 { :Monica rdf:type ex:Person .
+      :Monica ex:hasSkill ex:Programming }
+
+:G4 { :Phobe ex:name "Phobe Buffet" }
+
+:G3 { :G1 swp:assertedBy _:w1 .
+      _:w1 swp:authority :Chris .
+      _:w1 dc:date "2003-10-02"^^xsd:date .
+      :G2 swp:quotedBy _:w2 .
+      :G4 swp:assertedBy _:w2 .
+      _:w2 dc:date "2003-09-03"^^xsd:date .
+      _:w2 swp:authority :Tom .
+      :Chris rdf:type ex:Person .
+      :Chris ex:email <mailto:[email protected]>.
+      :Tom rdf:type ex:Person .
+      :Tom ex:email <mailto:[email protected]>}
+```
+
+We will assume that this file is saved on your classpath somewhere at `<TRIG_FILE>` (the code below loads it as `namedgraphs.trig`).
+
+Load data through API:
+
+``` JAVA
+InputStream stream = Thread.currentThread().getContextClassLoader().getResourceAsStream("namedgraphs.trig");
+RepositoryConnection conn = repository.getConnection();
+conn.add(stream, "", RDFFormat.TRIG);
+conn.commit();
+```
+
+Now that the data is loaded, we can easily query it. For example, we will query to find what `hasSkill` is defined in graph G2, and relate that to someone defined in G1.
+
+**Query:**
+
+```
+PREFIX  ex:  <http://www.example.org/exampleDocument#>
+PREFIX  voc:  <http://www.example.org/vocabulary#>
+PREFIX  foaf:  <http://xmlns.com/foaf/0.1/>
+PREFIX  rdfs:  <http://www.w3.org/2000/01/rdf-schema#>
+
+SELECT *
+WHERE
+{
+  GRAPH ex:G1
+  {
+    ?m voc:name ?name ;
+       voc:homepage ?hp .
+  } .
+ GRAPH ex:G2
+  {
+    ?m voc:hasSkill ?skill .
+  } .
+}
+```
+
+**Results:**
+
+```
+[hp=http://www.monicamurphy.org;m=http://www.example.org/exampleDocument#Monica;skill=http://www.example.org/vocabulary#Programming;name="Monica Murphy"]
+```
+
+**Here is the Query Plan as well:**
+
+```
+QueryRoot
+   Projection
+      ProjectionElemList
+         ProjectionElem "m"
+         ProjectionElem "name"
+         ProjectionElem "hp"
+         ProjectionElem "skill"
+      Join
+         Join
+            StatementPattern FROM NAMED CONTEXT
+               Var (name=m)
+               Var (name=-const-2, value=http://www.example.org/vocabulary#name, anonymous)
+               Var (name=name)
+               Var (name=-const-1, value=http://www.example.org/exampleDocument#G1, anonymous)
+            StatementPattern FROM NAMED CONTEXT
+               Var (name=m)
+               Var (name=-const-3, value=http://www.example.org/vocabulary#homepage, anonymous)
+               Var (name=hp)
+               Var (name=-const-1, value=http://www.example.org/exampleDocument#G1, anonymous)
+         StatementPattern FROM NAMED CONTEXT
+            Var (name=m)
+            Var (name=-const-5, value=http://www.example.org/vocabulary#hasSkill, anonymous)
+            Var (name=skill)
+            Var (name=-const-4, value=http://www.example.org/exampleDocument#G2, anonymous)
+```
+
+## Inserting named graph data through Sparql
+
+The new SPARQL Update standard provides another way to insert data, even into named graphs.
+
+First the insert update:
+
+```
+PREFIX dc: <http://purl.org/dc/elements/1.1/>
+PREFIX ex: <http://example/addresses#>
+INSERT DATA
+{
+    GRAPH ex:G1 {
+        <http://example/book3> dc:title    "A new book" ;
+                               dc:creator  "A.N.Other" .
+    }
+}
+```
+
+Performing this update requires different code than querying the data directly:
+
+``` JAVA
+Update update = conn.prepareUpdate(QueryLanguage.SPARQL, insert);
+update.execute();
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/sm-simpleaqr.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/sm-simpleaqr.md 
b/extras/rya.manual/src/site/markdown/sm-simpleaqr.md
new file mode 100644
index 0000000..a509195
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/sm-simpleaqr.md
@@ -0,0 +1,54 @@
+# Simple Add Query and Remove of Statements
+
+This quick tutorial will give a small example on how to add, query, and remove statements from Rya.
+
+## Code
+
+``` JAVA
+//setup
+Connector connector = new ZooKeeperInstance("instance", 
"zoo1,zoo2,zoo3").getConnector("user", "password");
+final RdfCloudTripleStore store = new RdfCloudTripleStore();
+AccumuloRyaDAO crdfdao = new AccumuloRyaDAO();
+crdfdao.setConnector(connector);
+
+AccumuloRdfConfiguration conf = new AccumuloRdfConfiguration();
+conf.setTablePrefix("rts_");
+conf.setDisplayQueryPlan(true);
+crdfdao.setConf(conf);
+store.setRdfDao(crdfdao);
+
+ProspectorServiceEvalStatsDAO evalDao = new ProspectorServiceEvalStatsDAO(connector, conf);
+evalDao.init();
+store.setRdfEvalStatsDAO(evalDao);
+
+InferenceEngine inferenceEngine = new InferenceEngine();
+inferenceEngine.setRdfDao(crdfdao);
+inferenceEngine.setConf(conf);
+store.setInferenceEngine(inferenceEngine);
+
+Repository myRepository = new RyaSailRepository(store);
+myRepository.initialize();
+RepositoryConnection conn = myRepository.getConnection();
+
+//define and add statement
+ValueFactory vf = myRepository.getValueFactory(); //needed for createURI below
+String litdupsNS = "urn:test:litdups#";
+URI cpu = vf.createURI(litdupsNS, "cpu");
+URI loadPerc = vf.createURI(litdupsNS, "loadPerc");
+URI uri1 = vf.createURI(litdupsNS, "uri1");
+conn.add(cpu, loadPerc, uri1);
+conn.commit();
+
+//query for all statements that have subject=cpu and pred=loadPerc (wildcard object)
+RepositoryResult<Statement> result = conn.getStatements(cpu, loadPerc, null, true);
+while(result.hasNext()) {
+    System.out.println(result.next());
+}
+result.close();
+
+//remove statement
+conn.remove(cpu, loadPerc, uri1);
+
+//close
+conn.close();
+myRepository.shutDown();
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/sm-sparqlquery.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/sm-sparqlquery.md 
b/extras/rya.manual/src/site/markdown/sm-sparqlquery.md
new file mode 100644
index 0000000..cff732a
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/sm-sparqlquery.md
@@ -0,0 +1,58 @@
+# Sparql Query
+
+This quick tutorial will give a small example on how to query data with SPARQL.
+
+## Code
+
+``` JAVA
+//setup
+Connector connector = new ZooKeeperInstance("instance", "zoo1,zoo2,zoo3").getConnector("user", "password");
+final RdfCloudTripleStore store = new RdfCloudTripleStore();
+AccumuloRyaDAO crdfdao = new AccumuloRyaDAO();
+crdfdao.setConnector(connector);
+
+AccumuloRdfConfiguration conf = new AccumuloRdfConfiguration();
+conf.setTablePrefix("rts_");
+conf.setDisplayQueryPlan(true);
+crdfdao.setConf(conf);
+store.setRdfDao(crdfdao);
+
+ProspectorServiceEvalStatsDAO evalDao = new ProspectorServiceEvalStatsDAO(connector, conf);
+evalDao.init();
+store.setRdfEvalStatsDAO(evalDao);
+
+InferenceEngine inferenceEngine = new InferenceEngine();
+inferenceEngine.setRdfDao(crdfdao);
+inferenceEngine.setConf(conf);
+store.setInferenceEngine(inferenceEngine);
+
+Repository myRepository = new RyaSailRepository(store);
+myRepository.initialize();
+RepositoryConnection conn = myRepository.getConnection();
+
+//define and add statements
+ValueFactory vf = myRepository.getValueFactory(); //needed for createURI below
+String litdupsNS = "urn:test:litdups#";
+URI cpu = vf.createURI(litdupsNS, "cpu");
+URI loadPerc = vf.createURI(litdupsNS, "loadPerc");
+URI uri1 = vf.createURI(litdupsNS, "uri1");
+URI pred2 = vf.createURI(litdupsNS, "pred2");
+URI uri2 = vf.createURI(litdupsNS, "uri2");
+conn.add(cpu, loadPerc, uri1);
+conn.add(cpu, pred2, uri2); //without this statement the join in the query below matches nothing
+conn.commit();
+
+//query using sparql
+String query = "select * where {" +
+                "?x <" + loadPerc.stringValue() + "> ?o1." +
+                "?x <" + pred2.stringValue() + "> ?o2." +
+                "}";
+TupleQuery tupleQuery = conn.prepareTupleQuery(QueryLanguage.SPARQL, query);
+TupleQueryResult result = tupleQuery.evaluate();
+while(result.hasNext()) {
+      System.out.println(result.next());
+}
+result.close();
+
+//close
+conn.close();
+myRepository.shutDown();
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/markdown/sm-updatedata.md
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/markdown/sm-updatedata.md 
b/extras/rya.manual/src/site/markdown/sm-updatedata.md
new file mode 100644
index 0000000..7227f54
--- /dev/null
+++ b/extras/rya.manual/src/site/markdown/sm-updatedata.md
@@ -0,0 +1,62 @@
+# Sparql Update
+
+OpenRDF supports the Sparql Update functionality. Here are a few samples:
+
+Remember, you have to use `RepositoryConnection.prepareUpdate(..)` to perform these updates, as sketched below.
+
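+A minimal sketch, assuming an open `RepositoryConnection` named `conn` and one of the update strings below in `updateString` (this mirrors the snippet in the Named Graphs section):
+
+``` JAVA
+Update update = conn.prepareUpdate(QueryLanguage.SPARQL, updateString);
+update.execute();
+```
+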
+**Insert:**
+
+```
+PREFIX dc: <http://purl.org/dc/elements/1.1/>
+INSERT DATA
+{ <http://example/book3> dc:title    "A new book" ;
+                         dc:creator  "A.N.Other" .
+}
+```
+
+**Delete:**
+
+```
+PREFIX dc: <http://purl.org/dc/elements/1.1/>
+DELETE DATA
+{ <http://example/book3> dc:title    "A new book" ;
+                         dc:creator  "A.N.Other" .
+}
+```
+
+**Update:**
+
+```
+PREFIX dc: <http://purl.org/dc/elements/1.1/>
+DELETE { ?book dc:title ?title }
+INSERT { ?book dc:title "A newer book".         ?book dc:add "Additional Info" 
}
+WHERE
+  { ?book dc:creator "A.N.Other" .
+  }
+```
+
+**Insert Named Graph:**
+
+```
+PREFIX dc: <http://purl.org/dc/elements/1.1/>
+PREFIX ex: <http://example/addresses#>
+INSERT DATA
+{ GRAPH ex:G1 {
+<http://example/book3> dc:title    "A new book" ;
+                         dc:creator  "A.N.Other" .
+}
+}
+```
+
+**Update Named Graph:**
+
+```
+PREFIX dc: <http://purl.org/dc/elements/1.1/>
+WITH <http://example/addresses#G1>
+DELETE { ?book dc:title ?title }
+INSERT { ?book dc:title "A newer book".
+         ?book dc:add "Additional Info" }
+WHERE
+  { ?book dc:creator "A.N.Other" .
+  }
+```

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/resources/js/fixmarkdownlinks.js
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/resources/js/fixmarkdownlinks.js 
b/extras/rya.manual/src/site/resources/js/fixmarkdownlinks.js
new file mode 100644
index 0000000..484c5d3
--- /dev/null
+++ b/extras/rya.manual/src/site/resources/js/fixmarkdownlinks.js
@@ -0,0 +1,6 @@
+window.onload = function() {
+    var anchors = document.getElementsByTagName("a");
+    for (var i = 0; i < anchors.length; i++) {
+        //rewrite links to .md sources so they point at the rendered .html pages
+        anchors[i].href = anchors[i].href.replace(/\.md$/, '.html');
+    }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-rya/blob/ce4a10ff/extras/rya.manual/src/site/site.xml
----------------------------------------------------------------------
diff --git a/extras/rya.manual/src/site/site.xml 
b/extras/rya.manual/src/site/site.xml
new file mode 100644
index 0000000..19078a0
--- /dev/null
+++ b/extras/rya.manual/src/site/site.xml
@@ -0,0 +1,45 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+<project name="Maven">
+  <skin>
+    <groupId>org.apache.maven.skins</groupId>
+    <artifactId>maven-fluido-skin</artifactId>
+    <version>1.4</version>
+  </skin>
+  <custom>
+    <fluidoSkin>
+      <topBarEnabled>false</topBarEnabled>
+      <sideBarEnabled>true</sideBarEnabled>
+      <sourceLineNumbersEnabled>false</sourceLineNumbersEnabled>
+    </fluidoSkin>
+  </custom>
+  
+  <body>
+    <head>
+        <script src="./js/fixmarkdownlinks.js" type="text/javascript"></script>
+    </head>
+    <menu name="Rya">
+        <item name="Overview" href="overview.html"/>
+        <item name="Quick Start" href="quickstart.html"/>
+        <item name="Load Data" href="loaddata.html"/>
+        <item name="Query Data" href="querydata.html"/>
+        <item name="Evaluation Table" href="eval.html"/>
+        <item name="Pre-computed Joins" href="loadPrecomputedJoin.html"/>
+        <item name="Inferencing" href="infer.html"/>
+    </menu>
+
+    <menu name="Samples">
+        <item name="Typical First Steps" href="sm-firststeps.html"/>
+        <item name="Simple Add/Query/Remove Statements" 
href="sm-simpleaqr.html"/>
+        <item name="Sparql query" href="sm-sparqlquery.html"/>
+        <item name="Adding Authentication" href="sm-addauth.html"/>
+        <item name="Inferencing" href="sm-infer.html"/>
+        <item name="Named Graph" href="sm-namedgraph.html"/>
+        <item name="Update data" href="sm-updatedata.html"/>
+        <item name="Alx" href="alx.html"/>
+    </menu>
+
+    <menu name="Development">
+        <item name="Building From Source" href="build-source.html"/>
+    </menu>
+  </body>
+</project>
\ No newline at end of file
