Repository: incubator-hivemall
Updated Branches:
  refs/heads/master 7ec82a6a8 -> 11bd1f83e


Close #101: [HIVEMALL-108-3] Describe generic predictors' auxiliary options in document


Project: http://git-wip-us.apache.org/repos/asf/incubator-hivemall/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hivemall/commit/11bd1f83
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hivemall/tree/11bd1f83
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hivemall/diff/11bd1f83

Branch: refs/heads/master
Commit: 11bd1f83e68a7fbd2e0cc7143303e35e32edf692
Parents: 7ec82a6
Author: Takuya Kitazawa <[email protected]>
Authored: Tue Jul 18 14:52:37 2017 +0900
Committer: Makoto Yui <[email protected]>
Committed: Tue Jul 18 14:52:37 2017 +0900

----------------------------------------------------------------------
 .../java/hivemall/common/ConversionState.java   |  4 +--
 docs/gitbook/misc/prediction.md                 | 32 +++++++++++++++-----
 2 files changed, 27 insertions(+), 9 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-hivemall/blob/11bd1f83/core/src/main/java/hivemall/common/ConversionState.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/hivemall/common/ConversionState.java 
b/core/src/main/java/hivemall/common/ConversionState.java
index ff92241..7b5923f 100644
--- a/core/src/main/java/hivemall/common/ConversionState.java
+++ b/core/src/main/java/hivemall/common/ConversionState.java
@@ -81,7 +81,7 @@ public final class ConversionState {
         return currLosses > prevLosses;
     }
 
-    public boolean isConverged(final long obserbedTrainingExamples) {
+    public boolean isConverged(final long observedTrainingExamples) {
         if (conversionCheck == false) {
             return false;
         }
@@ -110,7 +110,7 @@ public final class ConversionState {
             if (logger.isDebugEnabled()) {
                 logger.debug("Iteration #" + curIter + " [curLosses=" + currLosses
                         + ", prevLosses=" + prevLosses + ", changeRate=" + changeRate
-                        + ", #trainingExamples=" + obserbedTrainingExamples + ']');
+                        + ", #trainingExamples=" + observedTrainingExamples + ']');
             }
             this.readyToFinishIterations = false;
         }

http://git-wip-us.apache.org/repos/asf/incubator-hivemall/blob/11bd1f83/docs/gitbook/misc/prediction.md
----------------------------------------------------------------------
diff --git a/docs/gitbook/misc/prediction.md b/docs/gitbook/misc/prediction.md
index 317d688..ee85e40 100644
--- a/docs/gitbook/misc/prediction.md
+++ b/docs/gitbook/misc/prediction.md
@@ -109,8 +109,8 @@ Below we list possible options for `train_regression` and `train_classifier`, an
        - For `train_regression`
                - SquaredLoss (synonym: squared)
                - QuantileLoss (synonym: quantile)
-               - EpsilonInsensitiveLoss (synonym: epsilon_intensitive)
-               - SquaredEpsilonInsensitiveLoss (synonym: squared_epsilon_intensitive)
+               - EpsilonInsensitiveLoss (synonym: epsilon_insensitive)
+               - SquaredEpsilonInsensitiveLoss (synonym: squared_epsilon_insensitive)
                - HuberLoss (synonym: huber)
        - For `train_classifier`
                - HingeLoss (synonym: hinge)
@@ -120,8 +120,8 @@ Below we list possible options for `train_regression` and `train_classifier`, an
                - The following losses are mainly designed for regression but can sometimes be useful in classification as well:
                  - SquaredLoss (synonym: squared)
                  - QuantileLoss (synonym: quantile)
-                 - EpsilonInsensitiveLoss (synonym: epsilon_intensitive)
-                 - SquaredEpsilonInsensitiveLoss (synonym: squared_epsilon_intensitive)
+                 - EpsilonInsensitiveLoss (synonym: epsilon_insensitive)
+                 - SquaredEpsilonInsensitiveLoss (synonym: squared_epsilon_insensitive)
                  - HuberLoss (synonym: huber)
 
 - Regularization function: `-reg`, `-regularization`
@@ -130,9 +130,9 @@ Below we list possible options for `train_regression` and `train_classifier`, an
        - ElasticNet
        - RDA
        
-Additionally, there are several variants of the SGD technique, and it is also configureable as:
+Additionally, there are several variants of the SGD technique, and it is also configurable as:
 
-- Optimizer `-opt`, `-optimizer`
+- Optimizer: `-opt`, `-optimizer`
        - SGD
        - AdaGrad
        - AdaDelta
@@ -140,6 +140,24 @@ Additionally, there are several variants of the SGD technique, and it is also co
 
 > #### Note
 >
-> Option values are case insensitive and you can use `sgd` or `rda`, or `huberloss`.
+> Option values are case insensitive; for example, you can write `sgd`, `rda`, or `huberloss` in lower-case letters.
+
+Furthermore, the optimizer accepts auxiliary options such as:
+
+- Number of iterations: `-iter`, `-iterations` [default: 10]
+       - Repeat the optimizer's learning procedure multiple times to find a better result.
+- Convergence rate: `-cv_rate`, `-convergence_rate` [default: 0.005]
+       - Define a stopping criterion for the iterative training.
+       - If the criterion is too small or too large, you may encounter over-fitting or under-fitting depending on the value of the `-iter` option.
+- Mini-batch size: `-mini_batch`, `-mini_batch_size` [default: 1]
+       - Instead of learning samples one-by-one, this option enables the optimizer to utilize multiple samples at once to minimize the error function.
+       - An appropriate mini-batch size leads to efficient training and an effective prediction model.
+
+To list all of the available options in detail, the following queries might be helpful:
+
+```sql
+select train_regression(array(), 0, '-help');
+select train_classifier(array(), 0, '-help');
+```
 
 In practice, you can try different combinations of the options in order to achieve higher prediction accuracy.
\ No newline at end of file
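As a concrete illustration of the options documented in the patch above, the query below sketches how the loss, regularization, optimizer, and auxiliary options might be combined in one `train_regression` call. The `training` table and its `features`/`label` columns are hypothetical placeholders, and the `GROUP BY feature` averaging follows the usual Hivemall pattern of merging per-mapper weights:

```sql
-- Hypothetical table "training" with columns:
--   features array<string>, label float
SELECT
  feature,
  avg(weight) AS weight  -- merge weights learned by parallel mappers
FROM (
  SELECT
    train_regression(
      features, label,
      '-loss squaredloss -reg l1 -opt adagrad -iter 20 -cv_rate 0.005 -mini_batch 10'
    ) AS (feature, weight)
  FROM
    training
) t
GROUP BY feature;
```

Since option values are case insensitive, `-loss squaredloss` and `-opt adagrad` are equivalent to `SquaredLoss` and `AdaGrad`.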
