Repository: spark
Updated Branches:
  refs/heads/master adbd5c163 -> 1208f72ac


[SPARK-2839][MLlib] Stats Toolkit documentation updated

Documentation updated for the Statistics Toolkit of MLlib. mengxr atalwalkar

https://issues.apache.org/jira/browse/SPARK-2839

P.S. Accidentally closed #2123. New commits didn't show up after I reopened the 
PR. I've opened this instead and closed the old one.

Author: Burak <[email protected]>

Closes #2130 from brkyvz/StatsLib-Docs and squashes the following commits:

a54a855 [Burak] [SPARK-2839][MLlib] Addressed comments
bfc6896 [Burak] [SPARK-2839][MLlib] Added a more specific link to colStats() for pyspark
213fe3f [Burak] [SPARK-2839][MLlib] Modifications made according to review
fec4d9d [Burak] [SPARK-2830][MLlib] Stats Toolkit documentation updated


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1208f72a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/1208f72a
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/1208f72a

Branch: refs/heads/master
Commit: 1208f72ac78960fe5060187761479b2a9a417c1b
Parents: adbd5c1
Author: Burak <[email protected]>
Authored: Tue Aug 26 15:18:42 2014 -0700
Committer: Xiangrui Meng <[email protected]>
Committed: Tue Aug 26 15:18:42 2014 -0700

----------------------------------------------------------------------
 docs/mllib-stats.md | 372 +++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 331 insertions(+), 41 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/1208f72a/docs/mllib-stats.md
----------------------------------------------------------------------
diff --git a/docs/mllib-stats.md b/docs/mllib-stats.md
index f25dca7..511a9fb 100644
--- a/docs/mllib-stats.md
+++ b/docs/mllib-stats.md
@@ -25,6 +25,85 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Statistics Functionality
 \newcommand{\zero}{\mathbf{0}}
 \]`
 
+## Summary statistics
+
+We provide column summary statistics for `RDD[Vector]` through the function `colStats`
+available in `Statistics`.
+
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+
+[`colStats()`](api/scala/index.html#org.apache.spark.mllib.stat.Statistics$) returns an instance of
+[`MultivariateStatisticalSummary`](api/scala/index.html#org.apache.spark.mllib.stat.MultivariateStatisticalSummary),
+which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the
+total count.
+
+{% highlight scala %}
+import org.apache.spark.mllib.linalg.Vector
+import org.apache.spark.mllib.stat.{MultivariateStatisticalSummary, Statistics}
+import org.apache.spark.rdd.RDD
+
+val observations: RDD[Vector] = ... // an RDD of Vectors
+
+// Compute column summary statistics.
+val summary: MultivariateStatisticalSummary = Statistics.colStats(observations)
+println(summary.mean) // a dense vector containing the mean value for each column
+println(summary.variance) // column-wise variance
+println(summary.numNonzeros) // number of nonzeros in each column
+
+{% endhighlight %}
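+
+Beyond `mean`, `variance`, and `numNonzeros`, the returned summary also exposes the column-wise
+`max` and `min` vectors and the total row `count`. The following is a minimal sketch, assuming an
+existing `SparkContext` `sc` and a small hard-coded dataset used purely for illustration:
+
+{% highlight scala %}
+import org.apache.spark.mllib.linalg.Vectors
+import org.apache.spark.mllib.stat.Statistics
+
+// Illustrative data only: three observations with three columns each.
+val observations = sc.parallelize(Seq(
+  Vectors.dense(1.0, 10.0, 100.0),
+  Vectors.dense(2.0, 20.0, 200.0),
+  Vectors.dense(3.0, 30.0, 300.0)))
+
+val summary = Statistics.colStats(observations)
+println(summary.count) // total number of rows: 3
+println(summary.max)   // column-wise maxima: [3.0,30.0,300.0]
+println(summary.min)   // column-wise minima: [1.0,10.0,100.0]
+{% endhighlight %}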
+</div>
+
+<div data-lang="java" markdown="1">
+
+[`colStats()`](api/java/org/apache/spark/mllib/stat/Statistics.html) returns an instance of
+[`MultivariateStatisticalSummary`](api/java/org/apache/spark/mllib/stat/MultivariateStatisticalSummary.html),
+which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the
+total count.
+
+{% highlight java %}
+import org.apache.spark.api.java.JavaRDD;
+import org.apache.spark.api.java.JavaSparkContext;
+import org.apache.spark.mllib.linalg.Vector;
+import org.apache.spark.mllib.stat.MultivariateStatisticalSummary;
+import org.apache.spark.mllib.stat.Statistics;
+
+JavaSparkContext jsc = ...
+
+JavaRDD<Vector> mat = ... // an RDD of Vectors
+
+// Compute column summary statistics.
+MultivariateStatisticalSummary summary = Statistics.colStats(mat.rdd());
+System.out.println(summary.mean()); // a dense vector containing the mean value for each column
+System.out.println(summary.variance()); // column-wise variance
+System.out.println(summary.numNonzeros()); // number of nonzeros in each column
+
+{% endhighlight %}
+</div>
+
+<div data-lang="python" markdown="1">
+[`colStats()`](api/python/pyspark.mllib.stat.Statistics-class.html#colStats) returns an instance of
+[`MultivariateStatisticalSummary`](api/python/pyspark.mllib.stat.MultivariateStatisticalSummary-class.html),
+which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the
+total count.
+
+{% highlight python %}
+from pyspark.mllib.stat import Statistics
+
+sc = ... # SparkContext
+
+mat = ... # an RDD of Vectors
+
+# Compute column summary statistics.
+summary = Statistics.colStats(mat)
+print summary.mean()  # a dense vector containing the mean value for each column
+print summary.variance()  # column-wise variance
+print summary.numNonzeros()  # number of nonzeros in each column
+
+{% endhighlight %}
+</div>
+
+</div>
+
 ## Random data generation
 
 Random data generation is useful for randomized algorithms, prototyping, and performance testing.
@@ -99,69 +178,280 @@ v = u.map(lambda x: 1.0 + 2.0 * x)
 
 </div>
 
-## Stratified Sampling 
+## Correlations
 
-## Summary Statistics 
+Calculating the correlation between two series of data is a common operation in statistics. In MLlib
+we provide the flexibility to calculate pairwise correlations among many series. The supported
+correlation methods are currently Pearson's and Spearman's correlation.
+ 
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+[`Statistics`](api/scala/index.html#org.apache.spark.mllib.stat.Statistics$) provides methods to
+calculate correlations between series. Depending on the type of input, two `RDD[Double]`s or
+an `RDD[Vector]`, the output will be a `Double` or the correlation `Matrix` respectively.
+
+{% highlight scala %}
+import org.apache.spark.SparkContext
+import org.apache.spark.mllib.linalg._
+import org.apache.spark.mllib.stat.Statistics
+import org.apache.spark.rdd.RDD
+
+val sc: SparkContext = ...
+
+val seriesX: RDD[Double] = ... // a series
+val seriesY: RDD[Double] = ... // must have the same number of partitions and cardinality as seriesX
+
+// compute the correlation using Pearson's method. Use "spearman" for Spearman's method. If a
+// method is not specified, Pearson's method will be used by default.
+val correlation: Double = Statistics.corr(seriesX, seriesY, "pearson")
+
+val data: RDD[Vector] = ... // note that each Vector is a row and not a column
+
+// calculate the correlation matrix using Pearson's method. Use "spearman" for Spearman's method.
+// If a method is not specified, Pearson's method will be used by default.
+val correlMatrix: Matrix = Statistics.corr(data, "pearson")
+
+{% endhighlight %}
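+
+As a minimal sketch of the difference between the two methods (toy data, assuming an existing
+`SparkContext` `sc`), a monotone but non-linear relationship yields a Spearman correlation of
+exactly 1.0 while the Pearson correlation stays below 1.0:
+
+{% highlight scala %}
+import org.apache.spark.mllib.stat.Statistics
+
+// Illustrative series only; y increases with x but not perfectly linearly.
+val x = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0))
+val y = sc.parallelize(Seq(2.0, 4.0, 6.0, 9.0))
+
+println(Statistics.corr(x, y, "pearson"))  // slightly below 1.0
+println(Statistics.corr(x, y, "spearman")) // exactly 1.0, since the ranks agree
+{% endhighlight %}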
+</div>
+
+<div data-lang="java" markdown="1">
+[`Statistics`](api/java/org/apache/spark/mllib/stat/Statistics.html) provides methods to
+calculate correlations between series. Depending on the type of input, two `JavaDoubleRDD`s or
+a `JavaRDD<Vector>`, the output will be a `Double` or the correlation `Matrix` respectively.
+
+{% highlight java %}
+import org.apache.spark.api.java.JavaDoubleRDD;
+import org.apache.spark.api.java.JavaRDD;
+import org.apache.spark.api.java.JavaSparkContext;
+import org.apache.spark.mllib.linalg.*;
+import org.apache.spark.mllib.stat.Statistics;
+
+JavaSparkContext jsc = ...
+
+JavaDoubleRDD seriesX = ... // a series
+JavaDoubleRDD seriesY = ... // must have the same number of partitions and cardinality as seriesX
+
+// compute the correlation using Pearson's method. Use "spearman" for Spearman's method. If a
+// method is not specified, Pearson's method will be used by default.
+Double correlation = Statistics.corr(seriesX.srdd(), seriesY.srdd(), "pearson");
+
+JavaRDD<Vector> data = ... // note that each Vector is a row and not a column
+
+// calculate the correlation matrix using Pearson's method. Use "spearman" for Spearman's method.
+// If a method is not specified, Pearson's method will be used by default.
+Matrix correlMatrix = Statistics.corr(data.rdd(), "pearson");
+
+{% endhighlight %}
+</div>
 
-### Multivariate summary statistics
+<div data-lang="python" markdown="1">
+[`Statistics`](api/python/pyspark.mllib.stat.Statistics-class.html) provides methods to
+calculate correlations between series. Depending on the type of input, two `RDD[Double]`s or
+an `RDD[Vector]`, the output will be a `Double` or the correlation `Matrix` respectively.
+
+{% highlight python %}
+from pyspark.mllib.stat import Statistics
+
+sc = ... # SparkContext
+
+seriesX = ... # a series
+seriesY = ... # must have the same number of partitions and cardinality as seriesX
 
-We provide column summary statistics for `RowMatrix` (note: this functionality is not currently supported in `IndexedRowMatrix` or `CoordinateMatrix`).
-If the number of columns is not large, e.g., on the order of thousands, then the
-covariance matrix can also be computed as a local matrix, which requires $\mathcal{O}(n^2)$ storage where $n$ is the
-number of columns. The total CPU time is $\mathcal{O}(m n^2)$, where $m$ is the number of rows,
-and is faster if the rows are sparse.
+# Compute the correlation using Pearson's method. Use "spearman" for Spearman's method. If a
+# method is not specified, Pearson's method will be used by default.
+print Statistics.corr(seriesX, seriesY, method="pearson")
+
+data = ... # an RDD of Vectors
+# calculate the correlation matrix using Pearson's method. Use "spearman" for Spearman's method.
+# If a method is not specified, Pearson's method will be used by default. 
+print Statistics.corr(data, method="pearson")
+
+{% endhighlight %}
+</div>
+
+</div>
+
+## Stratified sampling
+
+Unlike the other statistics functions, which reside in MLlib, the stratified sampling methods
+`sampleByKey` and `sampleByKeyExact` can be performed on RDDs of key-value pairs. For stratified
+sampling, the keys can be thought of as labels and the values as specific attributes. For example,
+the key can be man or woman, or document ids, and the respective values can be the list of ages
+of the people in the population or the list of words in the documents. The `sampleByKey` method
+flips a coin to decide whether an observation will be sampled or not, therefore requires only one
+pass over the data, and provides an *expected* sample size. `sampleByKeyExact` requires significantly
+more resources than the per-stratum simple random sampling used in `sampleByKey`, but will provide
+the exact sample size with 99.99% confidence. `sampleByKeyExact` is currently not supported in
+Python.
 
 <div class="codetabs">
 <div data-lang="scala" markdown="1">
-
-[`computeColumnSummaryStatistics()`](api/scala/index.html#org.apache.spark.mllib.linalg.distributed.RowMatrix) returns an instance of
-[`MultivariateStatisticalSummary`](api/scala/index.html#org.apache.spark.mllib.stat.MultivariateStatisticalSummary),
-which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the
-total count.
+[`sampleByKeyExact()`](api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions) allows users to
+sample exactly $\lceil f_k \cdot n_k \rceil \, \forall k \in K$ items, where $f_k$ is the desired
+fraction for key $k$, $n_k$ is the number of key-value pairs for key $k$, and $K$ is the set of
+keys. Sampling without replacement requires one additional pass over the RDD to guarantee sample
+size, whereas sampling with replacement requires two additional passes.
 
 {% highlight scala %}
-import org.apache.spark.mllib.linalg.Matrix
-import org.apache.spark.mllib.linalg.distributed.RowMatrix
-import org.apache.spark.mllib.stat.MultivariateStatisticalSummary
+import org.apache.spark.SparkContext
+import org.apache.spark.SparkContext._
+import org.apache.spark.rdd.PairRDDFunctions
 
-val mat: RowMatrix = ... // a RowMatrix
+val sc: SparkContext = ...
 
-// Compute column summary statistics.
-val summary: MultivariateStatisticalSummary = mat.computeColumnSummaryStatistics()
-println(summary.mean) // a dense vector containing the mean value for each column
-println(summary.variance) // column-wise variance
-println(summary.numNonzeros) // number of nonzeros in each column
+val data = ... // an RDD[(K, V)] of any key value pairs
+val fractions: Map[K, Double] = ... // specify the exact fraction desired from each key
+
+// Get an approximate sample from each stratum
+val approxSample = data.sampleByKey(withReplacement = false, fractions)
+// Get an exact sample from each stratum
+val exactSample = data.sampleByKeyExact(withReplacement = false, fractions)
 
-// Compute the covariance matrix.
-val cov: Matrix = mat.computeCovariance()
 {% endhighlight %}
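+
+The generic `K` and `V` placeholders above are illustrative and will not compile as written. A
+minimal concrete sketch, assuming hypothetical String keys, hard-coded fractions, an existing
+`SparkContext` `sc`, and the imports above:
+
+{% highlight scala %}
+// Illustrative strata: the key is a label ("man"/"woman"), the value an age.
+val data = sc.parallelize(Seq(
+  ("man", 25), ("man", 32), ("woman", 29), ("woman", 41), ("woman", 55)))
+// Sample half of each stratum.
+val fractions = Map("man" -> 0.5, "woman" -> 0.5)
+
+val approxSample = data.sampleByKey(withReplacement = false, fractions)
+val exactSample = data.sampleByKeyExact(withReplacement = false, fractions)
+println(exactSample.countByKey()) // per-stratum sample sizes, exact for sampleByKeyExact
+{% endhighlight %}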
 </div>
 
 <div data-lang="java" markdown="1">
-
-[`RowMatrix#computeColumnSummaryStatistics`](api/java/org/apache/spark/mllib/linalg/distributed/RowMatrix.html#computeColumnSummaryStatistics()) returns an instance of
-[`MultivariateStatisticalSummary`](api/java/org/apache/spark/mllib/stat/MultivariateStatisticalSummary.html),
-which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the
-total count.
+[`sampleByKeyExact()`](api/java/org/apache/spark/api/java/JavaPairRDD.html) allows users to
+sample exactly $\lceil f_k \cdot n_k \rceil \, \forall k \in K$ items, where $f_k$ is the desired
+fraction for key $k$, $n_k$ is the number of key-value pairs for key $k$, and $K$ is the set of
+keys. Sampling without replacement requires one additional pass over the RDD to guarantee sample
+size, whereas sampling with replacement requires two additional passes.
 
 {% highlight java %}
-import org.apache.spark.mllib.linalg.Matrix;
-import org.apache.spark.mllib.linalg.distributed.RowMatrix;
-import org.apache.spark.mllib.stat.MultivariateStatisticalSummary;
+import java.util.Map;
 
-RowMatrix mat = ... // a RowMatrix
+import org.apache.spark.api.java.JavaPairRDD;
+import org.apache.spark.api.java.JavaSparkContext;
 
-// Compute column summary statistics.
-MultivariateStatisticalSummary summary = mat.computeColumnSummaryStatistics();
-System.out.println(summary.mean()); // a dense vector containing the mean value for each column
-System.out.println(summary.variance()); // column-wise variance
-System.out.println(summary.numNonzeros()); // number of nonzeros in each column
+JavaSparkContext jsc = ...
+
+JavaPairRDD<K, V> data = ... // an RDD of any key value pairs
+Map<K, Object> fractions = ... // specify the exact fraction desired from each key
+
+// Get an approximate sample from each stratum
+JavaPairRDD<K, V> approxSample = data.sampleByKey(false, fractions);
+// Get an exact sample from each stratum
+JavaPairRDD<K, V> exactSample = data.sampleByKeyExact(false, fractions);
+
+{% endhighlight %}
+</div>
+<div data-lang="python" markdown="1">
+[`sampleByKey()`](api/python/pyspark.rdd.RDD-class.html#sampleByKey) allows users to
+sample approximately $\lceil f_k \cdot n_k \rceil \, \forall k \in K$ items, where $f_k$ is the
+desired fraction for key $k$, $n_k$ is the number of key-value pairs for key $k$, and $K$ is the
+set of keys.
+
+*Note:* `sampleByKeyExact()` is currently not supported in Python.
+
+{% highlight python %}
+
+sc = ... # SparkContext
+
+data = ... # an RDD of any key value pairs
+fractions = ... # specify the exact fraction desired from each key as a dictionary
+
+approxSample = data.sampleByKey(False, fractions)
 
-// Compute the covariance matrix.
-Matrix cov = mat.computeCovariance();
 {% endhighlight %}
 </div>
+
+</div>
+
+## Hypothesis testing
+
+Hypothesis testing is a powerful tool in statistics to determine whether a result is statistically
+significant, i.e., whether this result occurred by chance or not. MLlib currently supports Pearson's
+chi-squared ( $\chi^2$) tests for goodness of fit and independence. The input data types determine
+whether the goodness of fit or the independence test is conducted. The goodness of fit test requires
+an input type of `Vector`, whereas the independence test requires a `Matrix` as input.
+
+MLlib also supports the input type `RDD[LabeledPoint]` to enable feature selection via chi-squared
+independence tests.
+
+<div class="codetabs">
+<div data-lang="scala" markdown="1">
+[`Statistics`](api/scala/index.html#org.apache.spark.mllib.stat.Statistics$) provides methods to
+run Pearson's chi-squared tests. The following example demonstrates how to run and interpret
+hypothesis tests.
+
+{% highlight scala %}
+import org.apache.spark.SparkContext
+import org.apache.spark.mllib.linalg._
+import org.apache.spark.mllib.regression.LabeledPoint
+import org.apache.spark.mllib.stat.Statistics
+import org.apache.spark.mllib.stat.test.ChiSqTestResult
+import org.apache.spark.rdd.RDD
+
+val sc: SparkContext = ...
+
+val vec: Vector = ... // a vector composed of the frequencies of events
+
+// compute the goodness of fit. If a second vector to test against is not supplied as a parameter,
+// the test runs against a uniform distribution.
+val goodnessOfFitTestResult = Statistics.chiSqTest(vec)
+println(goodnessOfFitTestResult) // summary of the test including the p-value, degrees of freedom,
+                                 // test statistic, the method used, and the null hypothesis.
+
+val mat: Matrix = ... // a contingency matrix
+
+// conduct Pearson's independence test on the input contingency matrix
+val independenceTestResult = Statistics.chiSqTest(mat) 
+println(independenceTestResult) // summary of the test including the p-value, degrees of freedom...
+
+val obs: RDD[LabeledPoint] = ... // (feature, label) pairs.
+
+// The contingency table is constructed from the raw (feature, label) pairs and used to conduct
+// the independence test. Returns an array containing the ChiSqTestResult for every feature
+// against the label.
+val featureTestResults: Array[ChiSqTestResult] = Statistics.chiSqTest(obs)
+var i = 1
+featureTestResults.foreach { result =>
+    println(s"Column $i:\n$result")
+    i += 1
+} // summary of the test 
+
+{% endhighlight %}
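+
+As a minimal sketch with hard-coded toy inputs (illustrative values only), the input type selects
+the test that is run:
+
+{% highlight scala %}
+import org.apache.spark.mllib.linalg.{Matrices, Vectors}
+import org.apache.spark.mllib.stat.Statistics
+
+// Observed frequencies for five events; with no second vector supplied, the goodness of fit
+// test compares them against a uniform distribution.
+val vec = Vectors.dense(0.1, 0.15, 0.2, 0.3, 0.25)
+println(Statistics.chiSqTest(vec).pValue)
+
+// A 3x2 contingency matrix (values in column-major order) triggers the independence test.
+val mat = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
+println(Statistics.chiSqTest(mat).degreesOfFreedom) // (3 - 1) * (2 - 1) = 2
+{% endhighlight %}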
 </div>
 
+<div data-lang="java" markdown="1">
+[`Statistics`](api/java/org/apache/spark/mllib/stat/Statistics.html) provides methods to
+run Pearson's chi-squared tests. The following example demonstrates how to run and interpret
+hypothesis tests.
+
+{% highlight java %}
+import org.apache.spark.api.java.JavaRDD;
+import org.apache.spark.api.java.JavaSparkContext;
+import org.apache.spark.mllib.linalg.*;
+import org.apache.spark.mllib.regression.LabeledPoint;
+import org.apache.spark.mllib.stat.Statistics;
+import org.apache.spark.mllib.stat.test.ChiSqTestResult;
+
+JavaSparkContext jsc = ...
 
-## Hypothesis Testing 
+Vector vec = ... // a vector composed of the frequencies of events
+
+// compute the goodness of fit. If a second vector to test against is not supplied as a parameter,
+// the test runs against a uniform distribution.
+ChiSqTestResult goodnessOfFitTestResult = Statistics.chiSqTest(vec);
+// summary of the test including the p-value, degrees of freedom, test statistic, the method used,
+// and the null hypothesis.
+System.out.println(goodnessOfFitTestResult);
+
+Matrix mat = ... // a contingency matrix
+
+// conduct Pearson's independence test on the input contingency matrix
+ChiSqTestResult independenceTestResult = Statistics.chiSqTest(mat);
+// summary of the test including the p-value, degrees of freedom...
+System.out.println(independenceTestResult);
+
+JavaRDD<LabeledPoint> obs = ... // an RDD of labeled points
+
+// The contingency table is constructed from the raw (feature, label) pairs and used to conduct
+// the independence test. Returns an array containing the ChiSqTestResult for every feature
+// against the label.
+ChiSqTestResult[] featureTestResults = Statistics.chiSqTest(obs.rdd());
+int i = 1;
+for (ChiSqTestResult result : featureTestResults) {
+    System.out.println("Column " + i + ":");
+    System.out.println(result); // summary of the test
+    i++;
+}
+
+{% endhighlight %}
+</div>
+
+</div>

