Author: kamaci
Date: Sat Oct 24 23:11:16 2015
New Revision: 1710398
URL: http://svn.apache.org/viewvc?rev=1710398&view=rev
Log:
GoraSparkEngine tutorial improved.
Modified:
gora/site/trunk/content/current/tutorial.md
Modified: gora/site/trunk/content/current/tutorial.md
URL: http://svn.apache.org/viewvc/gora/site/trunk/content/current/tutorial.md?rev=1710398&r1=1710397&r2=1710398&view=diff
==============================================================================
--- gora/site/trunk/content/current/tutorial.md (original)
+++ gora/site/trunk/content/current/tutorial.md Sat Oct 24 23:11:16 2015
@@ -980,7 +980,7 @@ Log analytics example will be implemente
Data will be read from HBase, map/reduce methods will be run, and the result will
be written into Solr (version: 4.10.3).
The whole process will be done over Spark.
-Persist data into Hbase as described at [Log analytics in MapReduce](/current/tutorial.html#log-analytics-in-mapreduce)
+Persist data into HBase as described in [Log analytics in MapReduce](/current/tutorial.html#log-analytics-in-mapreduce).
To write the result into Solr, create a schemaless core named Metrics. The
easiest way is to rename the default core of collection1, located in the
`solr-4.10.3/example/example-schemaless/solr` folder, to Metrics and edit
`solr-4.10.3/example/example-schemaless/solr/Metrics/core.properties` as
follows:
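The diff hunk ends before the file contents appear. As a hedged sketch: after renaming the collection1 core directory to Metrics, Solr's core discovery expects `core.properties` to carry the new core name, so the edited file would presumably contain at least:

```properties
# Assumed minimal core.properties after renaming the core (illustrative, not
# copied from the tutorial): the name must match the new Metrics core.
name=Metrics
```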
@@ -1036,7 +1036,7 @@ Here are the functions of map and reduce
/** The number of milliseconds in a day */
private static final long DAY_MILIS = 1000 * 60 * 60 * 24;
-
+
/**
* map function used in calculation
*/
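The hunk above only shows the `DAY_MILIS` constant and the map function's Javadoc. As a minimal, self-contained sketch of what that constant is typically used for (plain Java; the class and method names here are illustrative, not the tutorial's actual Gora/Spark code), grouping log timestamps by day looks like:

```java
public class DayBucket {
    /** The number of milliseconds in a day (mirrors the tutorial's DAY_MILIS constant). */
    private static final long DAY_MILIS = 1000L * 60 * 60 * 24;

    /** Rounds an epoch-millisecond timestamp down to the start of its day. */
    public static long roundToDay(long timestampMs) {
        return (timestampMs / DAY_MILIS) * DAY_MILIS;
    }
}
```

With this rounding, a map function can emit `(url, day)` pairs as keys, and the reduce side simply sums the hits per key to get daily pageview counts.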
@@ -1096,7 +1096,7 @@ When you want to persist result into out
Configuration sparkHadoopConf =
goraSparkEngine.generateOutputConf(outStore);
reducedGoraRdd.saveAsNewAPIHadoopDataset(sparkHadoopConf);
-That's all! You can check Solr to verify the results.
+That's all! You can check Solr to verify the result.
## More Examples
Other than this tutorial, there are several places that you can find