[livedoc] Fix several minor typos detected by github.com/client9/misspell

Closes #457


Project: http://git-wip-us.apache.org/repos/asf/predictionio/repo
Commit: http://git-wip-us.apache.org/repos/asf/predictionio/commit/54415e10
Tree: http://git-wip-us.apache.org/repos/asf/predictionio/tree/54415e10
Diff: http://git-wip-us.apache.org/repos/asf/predictionio/diff/54415e10

Branch: refs/heads/master
Commit: 54415e1066ae2d646eef62ebfdf801ace1de2097
Parents: 65b2fa4
Author: Kazuhiro Sera <[email protected]>
Authored: Sat Aug 11 09:15:49 2018 +0900
Committer: Naoki Takezoe <[email protected]>
Committed: Sat Aug 11 09:16:20 2018 +0900

----------------------------------------------------------------------
 docs/manual/source/deploy/index.html.md                 |  2 +-
 docs/manual/source/evaluation/metricbuild.html.md       |  4 ++--
 docs/manual/source/evaluation/metricchoose.html.md      | 12 ++++++------
 docs/manual/source/evaluation/paramtuning.html.md       |  6 +++---
 docs/manual/source/gallery/templates.yaml               |  4 ++--
 docs/manual/source/resources/faq.html.md                |  2 +-
 .../templates/complementarypurchase/dase.html.md.erb    |  6 +++---
 .../complementarypurchase/quickstart.html.md.erb        |  2 +-
 .../templates/ecommercerecommendation/dase.html.md.erb  |  2 +-
 .../ecommercerecommendation/quickstart.html.md.erb      |  2 +-
 .../source/templates/leadscoring/dase.html.md.erb       |  2 +-
 .../templates/recommendation/batch-evaluator.html.md    |  2 +-
 .../templates/recommendation/evaluation.html.md.erb     |  2 +-
 .../templates/recommendation/quickstart.html.md.erb     |  2 +-
 .../source/templates/similarproduct/dase.html.md.erb    |  2 +-
 .../similarproduct/multi-events-multi-algos.html.md.erb |  4 ++--
 docs/manual/source/tryit/index.html.slim                |  6 +++---
 17 files changed, 31 insertions(+), 31 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/deploy/index.html.md
----------------------------------------------------------------------
diff --git a/docs/manual/source/deploy/index.html.md 
b/docs/manual/source/deploy/index.html.md
index b6c6964..99aceec 100644
--- a/docs/manual/source/deploy/index.html.md
+++ b/docs/manual/source/deploy/index.html.md
@@ -94,5 +94,5 @@ The down time is usually not more than a few seconds though 
it can be more.
 The last thing to do is to add this to your *crontab*:
 
 ```
-0 0 * * *   /path/to/script >/dev/null 2>/dev/null # mute both stdout and 
stderr to supress email sent from cron
+0 0 * * *   /path/to/script >/dev/null 2>/dev/null # mute both stdout and 
stderr to suppress email sent from cron
 ```

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/evaluation/metricbuild.html.md
----------------------------------------------------------------------
diff --git a/docs/manual/source/evaluation/metricbuild.html.md 
b/docs/manual/source/evaluation/metricbuild.html.md
index e9f5875..7560a28 100644
--- a/docs/manual/source/evaluation/metricbuild.html.md
+++ b/docs/manual/source/evaluation/metricbuild.html.md
@@ -30,7 +30,7 @@ A simplistic form of metric is a function which takes a
 `(Query, PredictedResult, ActualResult)`-tuple (*QPA-tuple*) as input
 and return a score.
Exploiting this property allows us to implement a custom metric with a single
-line of code (plus some boilerplates). We demonstate this with two metrics:
+line of code (plus some boilerplates). We demonstrate this with two metrics:
 accuracy and precision.
 
 <!--
@@ -101,7 +101,7 @@ Lines 3 to 4 is the method signature of `calcuate` method. 
The key difference
is that the return value is an `Option[Double]`, in contrast to `Double` for
 `AverageMetric`. This class only computes the average of `Some(.)` results.
 Lines 5 to 13 are the actual logic. The first `if` factors out the
-positively predicted case, and the computation is simliar to the accuracy
+positively predicted case, and the computation is similar to the accuracy
metric. The negatively predicted cases are the *don't cares*, for which we return
 `None`.
 

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/evaluation/metricchoose.html.md
----------------------------------------------------------------------
diff --git a/docs/manual/source/evaluation/metricchoose.html.md 
b/docs/manual/source/evaluation/metricchoose.html.md
index 90f478e..3cde2ec 100644
--- a/docs/manual/source/evaluation/metricchoose.html.md
+++ b/docs/manual/source/evaluation/metricchoose.html.md
@@ -22,10 +22,10 @@ limitations under the License.
 The [hyperparameter tuning module](/evaluation/paramtuning/) allows us to 
select
 the optimal engine parameter defined by a `Metric`.
 `Metric` determines the quality of an engine variant.
-We have skimmmed through the process of choosing the right `Metric` in previous
+We have skimmed through the process of choosing the right `Metric` in previous
 sections.
 
-This secion discusses basic evaluation metrics commonly used for
+This section discusses basic evaluation metrics commonly used for
 classification problems.
 If you are more interested in knowing how to *implement* a custom metric, 
please
 skip to [the next section](/evaluation/metricbuild/).
@@ -43,7 +43,7 @@ goal is to minimize the loss function.
 During tuning, it is important for us to understand the definition of the
 metric, to make sure it is aligned with the prediction engine's goal.
 
-In the classificaiton template, we use *Accuracy* as our metric.
+In the classification template, we use *Accuracy* as our metric.
 *Accuracy* is defined as:
 the percentage
of queries for which the engine is able to predict the correct label.
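For concreteness, the *Accuracy* metric defined above can be sketched in a few lines of Python (illustrative only, not part of the template code; the labels below are made up):

```python
# Illustrative sketch: accuracy as defined above -- the fraction of
# queries whose predicted label matches the actual label.

def accuracy(predicted: list, actual: list) -> float:
    assert len(predicted) == len(actual)
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(predicted)

# 3 of these 4 made-up predictions match the actual labels:
print(accuracy([1.0, 2.0, 2.0, 3.0], [1.0, 2.0, 3.0, 3.0]))  # 0.75
```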
@@ -51,7 +51,7 @@ of queries which the engine is able to predict the correct 
label.
 ## Common Metrics
 
 We illustrate the choice of metric with the following confusion matrix. Row
-represents the engine predicted label, column represents the acutal label.
+represents the engine predicted label, column represents the actual label.
 The second row means that of the 200 testing data points,
 the engine predicted 60 (15 + 35 + 10) of them as label 2.0,
 among which 35 are correct prediction (i.e. actual label is 2.0, matches with
@@ -77,7 +77,7 @@ which measures the correctness among all positive labels.
 A binary classifier gives only two
 output values (i.e. positive and negative).
For problems where there are multiple values (3 in our example),
-we first have to tranform our problem into
+we first have to transform our problem into
 a binary classification problem. For example, we can have problem whether
 label = 1.0. The confusion matrix now becomes:
 
@@ -99,7 +99,7 @@ which measures how many positive labels are successfully 
predicted amongst
 all positive labels.
 Formally, it is the ratio between the number of correct positive answer
 (true positive) and the sum of correct positive answer (true positive) and
-wrongly negatively labeled asnwer (false negative).
+wrongly negatively labeled answer (false negative).
 In this case, the recall is 30 / (30 + 15) = ~0.6667.
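The precision and recall definitions in this section reduce to simple ratios over the binary confusion counts. A small illustrative Python sketch (not part of the commit), using the true-positive and false-negative counts quoted above:

```python
# Illustrative sketch: precision and recall from binary confusion counts.
# The numbers used below are the ones quoted in the text: 30 true
# positives and 15 false negatives for the "label = 1.0" binary problem.

def precision(tp: int, fp: int) -> float:
    """Correct positive answers among all positively predicted answers."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Correct positive answers among all actually positive answers."""
    return tp / (tp + fn)

print(round(recall(tp=30, fn=15), 4))  # 30 / (30 + 15) = ~0.6667
```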
 
 

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/evaluation/paramtuning.html.md
----------------------------------------------------------------------
diff --git a/docs/manual/source/evaluation/paramtuning.html.md 
b/docs/manual/source/evaluation/paramtuning.html.md
index f9d9a9e..7046bed 100644
--- a/docs/manual/source/evaluation/paramtuning.html.md
+++ b/docs/manual/source/evaluation/paramtuning.html.md
@@ -292,7 +292,7 @@ For each point in the validation set, we construct the 
`Query` and
 
 We define a `Metric` which gives a *score* to engine params. The higher the
 score, the better the engine params are.
-In this template, we use accuray score which measures
+In this template, we use accuracy score which measures
 the portion of correct prediction among all data points.
 
 In MyClassification/src/main/scala/**Evaluation.scala**, the class
@@ -345,7 +345,7 @@ object EngineParamsList extends EngineParamsGenerator {
 }
 ```
 
-A good practise is to first define a base engine params, it contains the common
+A good practice is to first define a base engine params, which contains the common
 parameters used in all evaluations (lines 7 to 8). With the base params, we
construct the list of engine params we want to evaluate by
 adding or replacing the controller parameter. Lines 13 to 16 generate 3 engine
@@ -417,7 +417,7 @@ The best variant params can be found in best.json
 
 ## Notes
 
-- We deliberately not metion ***test set*** in this hyperparameter tuning 
guide.
+- We deliberately do not mention ***test set*** in this hyperparameter tuning 
guide.
 In machine learning literature, the ***test set*** is a separate piece of data
 which is used to evaluate the final engine params outputted by the evaluation
 process. This guarantees that no information in the training / validation set 
is

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/gallery/templates.yaml
----------------------------------------------------------------------
diff --git a/docs/manual/source/gallery/templates.yaml 
b/docs/manual/source/gallery/templates.yaml
index d3aae5f..e936ad5 100644
--- a/docs/manual/source/gallery/templates.yaml
+++ b/docs/manual/source/gallery/templates.yaml
@@ -227,7 +227,7 @@
     name: Viewed This Bought That
     repo: "https://github.com/vngrs/template-scala-parallel-viewedthenbought"
     description: |-
-      This Engine uses co-occurence algorithm to match viewed items to bought 
items. Using this engine you may predict which item the user will buy, given 
the item(s) browsed.
+      This Engine uses co-occurrence algorithm to match viewed items to bought 
items. Using this engine you may predict which item the user will buy, given 
the item(s) browsed.
     tags: [recommender]
     type: Parallel
     language: Scala
@@ -465,7 +465,7 @@
     name: classifier-kafka-streaming-template
     repo: "https://github.com/singsanj/classifier-kafka-streaming-template"
     description: |-
-      The template will provide a simple integration of DASE with kafka using 
spark streaming capabilites in order to play around with real time 
notification, messages ..
+      The template will provide a simple integration of DASE with kafka using 
spark streaming capabilities in order to play around with real time 
notification, messages ..
     tags: [classification]
     type: Parallel
     language: Scala

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/resources/faq.html.md
----------------------------------------------------------------------
diff --git a/docs/manual/source/resources/faq.html.md 
b/docs/manual/source/resources/faq.html.md
index 10412e3..b999f58 100644
--- a/docs/manual/source/resources/faq.html.md
+++ b/docs/manual/source/resources/faq.html.md
@@ -145,7 +145,7 @@ url, etc.) with it. You can supply these as pass-through 
arguments at the end of
 
If the engine training seems stuck, it's possible that the executor 
doesn't have enough memory.
 
-First, follow [instruction here]( 
http://spark.apache.org/docs/latest/spark-standalone.html) to start standalone 
Spark cluster and get the master URL. If you use the provided quick install 
script to install PredictionIO, the Spark is installed at 
`PredictionIO/vendors/spark-1.2.0/` where you could run the Spark commands in 
`sbin/` as described in the Spark documentation. Then use following train 
commmand to specify executor memory (default is only 512 MB) and driver memory.
+First, follow [instruction here]( 
http://spark.apache.org/docs/latest/spark-standalone.html) to start standalone 
Spark cluster and get the master URL. If you use the provided quick install 
script to install PredictionIO, the Spark is installed at 
`PredictionIO/vendors/spark-1.2.0/` where you could run the Spark commands in 
`sbin/` as described in the Spark documentation. Then use the following train 
command to specify executor memory (default is only 512 MB) and driver memory.
 
For example, the following command sets the Spark master to 
`spark://localhost:7077`
(the default URL of standalone cluster), sets the driver memory to 16G and sets 
the executor memory to 24G for `pio train`.

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/templates/complementarypurchase/dase.html.md.erb
----------------------------------------------------------------------
diff --git 
a/docs/manual/source/templates/complementarypurchase/dase.html.md.erb 
b/docs/manual/source/templates/complementarypurchase/dase.html.md.erb
index 7fc8522..75a9193 100644
--- a/docs/manual/source/templates/complementarypurchase/dase.html.md.erb
+++ b/docs/manual/source/templates/complementarypurchase/dase.html.md.erb
@@ -239,13 +239,13 @@ Parameter description:
 
 - **basketWindow**: The buy event is considered as the same basket as previous 
one if the time difference is within this window (in unit of seconds). For 
example, if it's set to 120, it means that if the user buys item B within 2 
minutes of previous purchase (item A), then the item set [A, B] is considered 
as the same basket. The purchase of this *basket* is referred to as one 
*transaction*.
- **maxRuleLength**: The maximum length of an association rule. Must 
be at least 2. For example, the rule "A implies B" has a length of 2 while the 
rule "A, B implies C" has a length of 3. Increasing this number will increase 
the training time significantly because more combinations are considered.
-- **minSupport**: The minimum required *support* for the item set to be 
considered as rule (valid range is 0 to 1). It's the percentage of the item set 
appearing among all transcations. This is used to filter out infrequent item 
set. For example, setting to 0.1 means that the item set must appear in 10 % of 
all transactions.
+- **minSupport**: The minimum required *support* for the item set to be 
considered as rule (valid range is 0 to 1). It's the percentage of the item set 
appearing among all transactions. This is used to filter out infrequent item 
set. For example, setting to 0.1 means that the item set must appear in 10 % of 
all transactions.
 - **minConfidence**: The minimum *confidence* required for the rules (valid 
range is 0 to 1). The confidence indicates the probability of the condition and 
consequence appearing in the same transaction. For example, if A appears in 30 
transactions and the item set [A, B] appears in 20 transactions, then the rule 
"A implies B" has a confidence of 20/30 (about 0.67).
- **minLift**: The minimum *lift* required for the rule. It should be set to 1 
to find high quality rules. It's the confidence of the rule divided by the 
support of the consequence. It is used to filter out rules whose consequence 
is very frequent anyway regardless of the condition.
- **minBasketSize**: The minimum number of items in a basket to be considered 
by the algorithm. This value must be at least 2.
 - **maxNumRulesPerCond**: Maximum number of rules generated per condition and 
stored in the model. By default, the top rules are sorted by *lift* score.
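The *support*, *confidence*, and *lift* arithmetic described in the parameter list above can be sketched as follows (an illustrative Python sketch, not the template's Scala code; the counts are the ones used in the examples above):

```python
# Illustrative sketch of the association-rule quantities described above.

def support(count_itemset: int, total_transactions: int) -> float:
    """Fraction of all transactions in which the item set appears."""
    return count_itemset / total_transactions

def confidence(count_cond_and_conseq: int, count_cond: int) -> float:
    """P(consequence | condition). E.g. A appears in 30 transactions and
    [A, B] in 20 of them, so "A implies B" has confidence 20/30."""
    return count_cond_and_conseq / count_cond

def lift(rule_confidence: float, support_conseq: float) -> float:
    """Confidence of the rule divided by the support of the consequence;
    values above 1 mean the condition genuinely raises the probability
    of the consequence."""
    return rule_confidence / support_conseq

print(round(confidence(20, 30), 4))  # 20/30 = ~0.6667
```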
 
-INFO: If you import your own data and the engine doesn't return any results, 
it could be caused by the following reasons: (1) the algorithm parameter 
constraint is too high and the algo couldn't find rules that satisfy the 
condition. you could try setting the following param to 0: **minSupport**, 
**minConfidence**, **minLift** and then see if anything returned (regardless of 
recommendation quality), and then adjust the parameter accordingly. (2) the 
complementary purchase engine requires buy event with correct eventTime. If you 
import data without specifying eventTime, the SDK will use current time because 
it assumes the event happens in real time (which is not the case if you import 
as batch offline), resulting in that all buy events are treated as one big 
transcation while they should be treated as multiple transcations.
+INFO: If you import your own data and the engine doesn't return any results, 
it could be caused by the following reasons: (1) the algorithm parameter 
constraint is too high and the algo couldn't find rules that satisfy the 
condition. You could try setting the following param to 0: **minSupport**, 
**minConfidence**, **minLift** and then see if anything returned (regardless of 
recommendation quality), and then adjust the parameter accordingly. (2) the 
complementary purchase engine requires buy event with correct eventTime. If you 
import data without specifying eventTime, the SDK will use current time because 
it assumes the event happens in real time (which is not the case if you import 
as batch offline), resulting in that all buy events are treated as one big 
transaction while they should be treated as multiple transactions.
 
 
 The values of these parameters can be specified in *algorithms* of
@@ -283,7 +283,7 @@ class Algorithm(val ap: AlgorithmParams)
 
 ### train(...)
 
-`train` is called when you run **pio train** to train a predictive model. The 
algorithm first find all basket transcations, generates and filters the 
association rules based on the algorithm parameters:
+`train` is called when you run **pio train** to train a predictive model. The 
algorithm first finds all basket transactions, generates and filters the 
association rules based on the algorithm parameters:
 
 ```scala
 

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/templates/complementarypurchase/quickstart.html.md.erb
----------------------------------------------------------------------
diff --git 
a/docs/manual/source/templates/complementarypurchase/quickstart.html.md.erb 
b/docs/manual/source/templates/complementarypurchase/quickstart.html.md.erb
index 720f9b8..09a4e15 100644
--- a/docs/manual/source/templates/complementarypurchase/quickstart.html.md.erb
+++ b/docs/manual/source/templates/complementarypurchase/quickstart.html.md.erb
@@ -244,7 +244,7 @@ User u10 buys item s2i1 at 2014-10-19 15:43:15.618000-07:53
 
 ## 6. Use the Engine
 
-Now, You can query the engine. For example, return top 3 items which are 
frequently bought with item "s2i1". You can sending this JSON '{ "items" : 
["s2i1"], "num" : 3 }' to the deployed engine. The engine will return a JSON 
with the recommeded items.
+Now, you can query the engine. For example, return top 3 items which are 
frequently bought with item "s2i1". You can send this JSON '{ "items" : 
["s2i1"], "num" : 3 }' to the deployed engine. The engine will return a JSON 
with the recommended items.
 
 If you include one or more items in the query, the engine will use each 
combination of the query items as condition, and return recommended items if 
there is any for this condition. For example, if you query items are ["A", 
"B"], then the engine will use ["A"], ["B"], and ["A", "B"] as condition and 
try to find top n recommended items for each combination.
 

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/templates/ecommercerecommendation/dase.html.md.erb
----------------------------------------------------------------------
diff --git 
a/docs/manual/source/templates/ecommercerecommendation/dase.html.md.erb 
b/docs/manual/source/templates/ecommercerecommendation/dase.html.md.erb
index d59435a..baa0017 100644
--- a/docs/manual/source/templates/ecommercerecommendation/dase.html.md.erb
+++ b/docs/manual/source/templates/ecommercerecommendation/dase.html.md.erb
@@ -375,7 +375,7 @@ Parameter description:
 ### train(...)
 
 `train` is called when you run **pio train**. This is where MLlib ALS 
algorithm,
-i.e. `ALS.trainImplicit()`, is used to train a predictive model. In addition, 
we also count the number of items being bought for each item as default model 
which will be used when there is no ALS model avaiable or other useful 
information about the user is avaiable during `predict`.
+i.e. `ALS.trainImplicit()`, is used to train a predictive model. In addition, 
we also count the number of items being bought for each item as default model 
which will be used when there is no ALS model available or other useful 
information about the user is available during `predict`.
 
 ```scala
 

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/templates/ecommercerecommendation/quickstart.html.md.erb
----------------------------------------------------------------------
diff --git 
a/docs/manual/source/templates/ecommercerecommendation/quickstart.html.md.erb 
b/docs/manual/source/templates/ecommercerecommendation/quickstart.html.md.erb
index 4fe3955..2f2670c 100644
--- 
a/docs/manual/source/templates/ecommercerecommendation/quickstart.html.md.erb
+++ 
b/docs/manual/source/templates/ecommercerecommendation/quickstart.html.md.erb
@@ -548,7 +548,7 @@ The following is sample JSON response:
 
Now let's send an item constraint "unavailableItems" (replace accessKey with 
your Access Key):
 
-NOTE: You can also use SDK to send this event as decribed in the SDK sample 
above.
+NOTE: You can also use the SDK to send this event as described in the SDK sample 
above.
 
 <div class="tabs">
   <div data-tab="REST API" data-lang="json">

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/templates/leadscoring/dase.html.md.erb
----------------------------------------------------------------------
diff --git a/docs/manual/source/templates/leadscoring/dase.html.md.erb 
b/docs/manual/source/templates/leadscoring/dase.html.md.erb
index e8abda2..6cee64c 100644
--- a/docs/manual/source/templates/leadscoring/dase.html.md.erb
+++ b/docs/manual/source/templates/leadscoring/dase.html.md.erb
@@ -441,7 +441,7 @@ http://localhost:8000/queries.json. PredictionIO converts 
the query, such as '{
 The `predict()` function does the following:
 
 1. convert the Query to the required feature vector input
-2. use the `RandomForestModel` to predict the probabilty of conversion given 
this feature.
+2. use the `RandomForestModel` to predict the probability of conversion given 
this feature.
 
 ```scala
 

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/templates/recommendation/batch-evaluator.html.md
----------------------------------------------------------------------
diff --git 
a/docs/manual/source/templates/recommendation/batch-evaluator.html.md 
b/docs/manual/source/templates/recommendation/batch-evaluator.html.md
index 4e872bc..d5eb8b2 100644
--- a/docs/manual/source/templates/recommendation/batch-evaluator.html.md
+++ b/docs/manual/source/templates/recommendation/batch-evaluator.html.md
@@ -70,7 +70,7 @@ NOTE: Alternatively, you can create a new DataSource 
extending original DataSour
 
 ## 2. Add a new Evaluator
 
-Create a new file `BatchPersistableEvaluator.scala`. Unlike the 
`MetricEvaluator`, this Evaluator simply writes the Query and correpsonding 
PredictedResult to the output directory without performaning any metrics 
calculation.
+Create a new file `BatchPersistableEvaluator.scala`. Unlike the 
`MetricEvaluator`, this Evaluator simply writes the Query and corresponding 
PredictedResult to the output directory without performing any metrics 
calculation.
 
 Note that output directory is specified by the variable `outputDir`.
 

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/templates/recommendation/evaluation.html.md.erb
----------------------------------------------------------------------
diff --git a/docs/manual/source/templates/recommendation/evaluation.html.md.erb 
b/docs/manual/source/templates/recommendation/evaluation.html.md.erb
index 564c83c..fc4ca43 100644
--- a/docs/manual/source/templates/recommendation/evaluation.html.md.erb
+++ b/docs/manual/source/templates/recommendation/evaluation.html.md.erb
@@ -151,7 +151,7 @@ event is mapped to a rating of 4. When we
 implement the metric, we have to specify a rating threshold, only the rating
 above the threshold is considered 'good'.
 
-- The absense of complete rating. It is extremely unlikely that the training
+- The absence of complete rating. It is extremely unlikely that the training
data contains ratings for all user-item tuples. In contrast, in a system 
containing
 1000 items, a user may only have rated 20 of them, leaving 980 items unrated. 
There
 is no way for us to certainly tell if the user likes an unrated product.

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/templates/recommendation/quickstart.html.md.erb
----------------------------------------------------------------------
diff --git a/docs/manual/source/templates/recommendation/quickstart.html.md.erb 
b/docs/manual/source/templates/recommendation/quickstart.html.md.erb
index 164095e..b5dae35 100644
--- a/docs/manual/source/templates/recommendation/quickstart.html.md.erb
+++ b/docs/manual/source/templates/recommendation/quickstart.html.md.erb
@@ -42,7 +42,7 @@ NOTE: You can customize to use other event.
 ### Input Query
 
 - user ID
-- num of recomended items
+- num of recommended items
 
 ### Output PredictedResult
 

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/templates/similarproduct/dase.html.md.erb
----------------------------------------------------------------------
diff --git a/docs/manual/source/templates/similarproduct/dase.html.md.erb 
b/docs/manual/source/templates/similarproduct/dase.html.md.erb
index d9dd677..030ee75 100644
--- a/docs/manual/source/templates/similarproduct/dase.html.md.erb
+++ b/docs/manual/source/templates/similarproduct/dase.html.md.erb
@@ -454,7 +454,7 @@ case class ALSAlgorithmParams(
 
 The `seed` parameter is an optional parameter, which is used by MLlib ALS 
algorithm internally to generate random values. If the `seed` is not specified, 
current system time would be used and hence each train may produce different 
results. Specify a fixed value for the `seed` if you want to have a deterministic 
result (For example, when you are testing).
 
-`ALS.trainImplicit()` then returns a `MatrixFactorizationModel` model which 
contains two RDDs: userFeatures and productFeatures. They correspond to the 
user X latent features matrix and item X latent features matrix, respectively. 
In this case, we will make use of the productFeatures matrix to find simliar 
products by comparing the similarity of the latent features. Hence, we store 
this productFeatures as defined in `ALSModel` class:
+`ALS.trainImplicit()` then returns a `MatrixFactorizationModel` model which 
contains two RDDs: userFeatures and productFeatures. They correspond to the 
user X latent features matrix and item X latent features matrix, respectively. 
In this case, we will make use of the productFeatures matrix to find similar 
products by comparing the similarity of the latent features. Hence, we store 
this productFeatures as defined in `ALSModel` class:
 
 ```scala
 class ALSModel(
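The "similarity of the latent features" mentioned above is typically cosine similarity between item feature vectors from the productFeatures matrix. A minimal, hypothetical Python sketch of the idea (the template itself implements this in Scala; the vector values here are made up):

```python
# Hypothetical sketch: cosine similarity between two item latent-feature
# vectors, as the ALS productFeatures matrix would provide per item.
import math

def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # undefined for a zero vector; treat as no similarity
    return dot / (norm_a * norm_b)

# Identical latent vectors are maximally similar:
print(cosine_similarity([0.2, 0.5, 0.1], [0.2, 0.5, 0.1]))  # 1.0 (up to rounding)
```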

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/templates/similarproduct/multi-events-multi-algos.html.md.erb
----------------------------------------------------------------------
diff --git 
a/docs/manual/source/templates/similarproduct/multi-events-multi-algos.html.md.erb
 
b/docs/manual/source/templates/similarproduct/multi-events-multi-algos.html.md.erb
index d8f2d4b..1e21b49 100644
--- 
a/docs/manual/source/templates/similarproduct/multi-events-multi-algos.html.md.erb
+++ 
b/docs/manual/source/templates/similarproduct/multi-events-multi-algos.html.md.erb
@@ -25,7 +25,7 @@ The [default algorithm described in 
DASE](dase.html#algorithm) uses user-to-item
 
 In this example, we will add another algorithm to process like/dislike events. 
The final PredictedResults will be the combined outputs of both algorithms.
 
-NOTE: This is just one of the ways to handle mutliple types of events. We use 
this use case to demonstrate how one can build an engine with multiple 
algorithms. You may also build one single algorithm which takes different 
events into account without using multiple algorithms.
+NOTE: This is just one of the ways to handle multiple types of events. We use 
this use case to demonstrate how one can build an engine with multiple 
algorithms. You may also build one single algorithm which takes different 
events into account without using multiple algorithms.
 
 This example will demonstrate the following:
 
@@ -397,7 +397,7 @@ Next, in order to train and deploy two algorithms for this 
engine, we also need
 
 ```
 
-INFO: You may notice that the parameters of the new `"likealgo"` contains the 
same fields as `"als"`. It is just becasuse the `LikeAlgorithm` class extends 
the original `ALSAlgorithm` class and shares the same algorithm parameter class 
defintion. If the other algorithm you add has its own parameter class, you just 
need to specify them inside its `params` field accordingly.
+INFO: You may notice that the parameters of the new `"likealgo"` contains the 
same fields as `"als"`. It is just because the `LikeAlgorithm` class extends 
the original `ALSAlgorithm` class and shares the same algorithm parameter class 
definition. If the other algorithm you add has its own parameter class, you 
just need to specify them inside its `params` field accordingly.
 
That's it! Now you have an engine configured with two algorithms.
 

http://git-wip-us.apache.org/repos/asf/predictionio/blob/54415e10/docs/manual/source/tryit/index.html.slim
----------------------------------------------------------------------
diff --git a/docs/manual/source/tryit/index.html.slim 
b/docs/manual/source/tryit/index.html.slim
index ccadcf0..a9bae7a 100644
--- a/docs/manual/source/tryit/index.html.slim
+++ b/docs/manual/source/tryit/index.html.slim
@@ -20,12 +20,12 @@ title: Try PredictionIO
 
           li
             h2 Start
-            p Start PredictionIO and it's dependancies with:
+            p Start PredictionIO and its dependencies with:
             p <code>$ pio-start-all</code>
             h2 Check Status
             p At any time you can run:
             p <code>$ pio status</code>
-            p Which checks the status of PredictionIO and it's dependancies.
+            p Which checks the status of PredictionIO and its dependencies.
 
           li
             h2 Build
@@ -49,7 +49,7 @@ title: Try PredictionIO
             h2 Querying Results
            p You can query results using cURL after you see "Ready to 
serve":
             p <code>$ curl -H "Content-Type: application/json" -d '{ "items": 
["296"], "num": 5 }' http://localhost:8000/queries.json</code>
-            p This will return 5 recomended movies for movie 296 (Pulp 
Fiction).
+            p This will return 5 recommended movies for movie 296 (Pulp 
Fiction).
 
           li
             h2 Next Steps
