Repository: incubator-systemml
Updated Branches:
  refs/heads/gh-pages 433da61b5 -> 540b76d24


Update spark-mlcontext-programming-guide.md

Add code for Spark 1.6, since the existing example breaks on Spark 1.6

Closes #245, #246.


Project: http://git-wip-us.apache.org/repos/asf/incubator-systemml/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-systemml/commit/540b76d2
Tree: http://git-wip-us.apache.org/repos/asf/incubator-systemml/tree/540b76d2
Diff: http://git-wip-us.apache.org/repos/asf/incubator-systemml/diff/540b76d2

Branch: refs/heads/gh-pages
Commit: 540b76d24ac1ba9894f495851a048a2926bff6e4
Parents: 433da61
Author: Romeo Kienzler <romeo.kienz...@gmail.com>
Authored: Fri Sep 16 12:29:43 2016 -0700
Committer: Deron Eriksson <de...@us.ibm.com>
Committed: Fri Sep 16 12:29:43 2016 -0700

----------------------------------------------------------------------
 spark-mlcontext-programming-guide.md | 3 +++
 1 file changed, 3 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/540b76d2/spark-mlcontext-programming-guide.md
----------------------------------------------------------------------
diff --git a/spark-mlcontext-programming-guide.md b/spark-mlcontext-programming-guide.md
index c7b2bb6..c446d1e 100644
--- a/spark-mlcontext-programming-guide.md
+++ b/spark-mlcontext-programming-guide.md
@@ -2284,6 +2284,9 @@ val numRows = 10000
 val numCols = 1000
 val rawData = LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1).toDF()
 
+//For Spark version <= 1.6.0 you can use createDataFrame() (comment the line above and uncomment the line below), and for Spark version >= 1.6.1 use .toDF()
+//val rawData = sqlContext.createDataFrame(LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1))
+
 // Repartition into a more parallelism-friendly number of partitions
 val data = rawData.repartition(64).cache()
 {% endhighlight %}
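
A minimal standalone sketch of the two variants this patch distinguishes, assuming a Spark 1.x spark-shell where sc and sqlContext are already in scope (as in the MLContext guide); the version boundary (<= 1.6.0 vs >= 1.6.1) is taken from the comment added in the patch above.

import org.apache.spark.mllib.util.LinearDataGenerator

val numRows = 10000
val numCols = 1000

// Spark >= 1.6.1: convert the RDD[LabeledPoint] with toDF(), which requires
// the SQLContext implicits to be in scope.
import sqlContext.implicits._
val rawData = LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1).toDF()

// Spark <= 1.6.0: build the DataFrame explicitly instead of using toDF().
// val rawData = sqlContext.createDataFrame(
//   LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1))

// Repartition into a more parallelism-friendly number of partitions
val data = rawData.repartition(64).cache()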
