incubator-systemml git commit: Update spark-mlcontext-programming-guide.md

2016-09-16 Thread deron
Repository: incubator-systemml
Updated Branches:
  refs/heads/gh-pages 433da61b5 -> 540b76d24


Update spark-mlcontext-programming-guide.md

Add code for Spark 1.6 since the example breaks on Spark 1.6

Closes #245, #246.


Project: http://git-wip-us.apache.org/repos/asf/incubator-systemml/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-systemml/commit/540b76d2
Tree: http://git-wip-us.apache.org/repos/asf/incubator-systemml/tree/540b76d2
Diff: http://git-wip-us.apache.org/repos/asf/incubator-systemml/diff/540b76d2

Branch: refs/heads/gh-pages
Commit: 540b76d24ac1ba9894f495851a048a2926bff6e4
Parents: 433da61
Author: Romeo Kienzler 
Authored: Fri Sep 16 12:29:43 2016 -0700
Committer: Deron Eriksson 
Committed: Fri Sep 16 12:29:43 2016 -0700

--
 spark-mlcontext-programming-guide.md | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/540b76d2/spark-mlcontext-programming-guide.md
--
diff --git a/spark-mlcontext-programming-guide.md b/spark-mlcontext-programming-guide.md
index c7b2bb6..c446d1e 100644
--- a/spark-mlcontext-programming-guide.md
+++ b/spark-mlcontext-programming-guide.md
@@ -2284,6 +2284,9 @@ val numRows = 1
 val numCols = 1000
 val rawData = LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1).toDF()
 
+//For Spark version <= 1.6.0 you can use createDataFrame() (comment the line above and uncomment the line below), and for Spark version >= 1.6.1 use .toDF()
+//val rawData = sqlContext.createDataFrame(LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1))
+
 // Repartition into a more parallelism-friendly number of partitions
 val data = rawData.repartition(64).cache()
 {% endhighlight %}
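The change above can be sketched as a complete snippet, assuming a running `SparkContext` named `sc` (e.g. from `spark-shell`) and that `LinearDataGenerator` comes from `org.apache.spark.mllib.util`; the values of `numRows` and `numCols` are taken from the diff context and may differ in the full guide:

```scala
import org.apache.spark.mllib.util.LinearDataGenerator
import org.apache.spark.sql.SQLContext

// Assumes an existing SparkContext `sc`; on Spark 1.x a SQLContext is built from it.
val sqlContext = new SQLContext(sc)
// Brings the .toDF() implicit conversion into scope for RDDs of case classes / LabeledPoint.
import sqlContext.implicits._

val numRows = 1
val numCols = 1000

// Spark >= 1.6.1: the implicit .toDF() conversion works directly on the generated RDD.
val rawData = LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1).toDF()

// Spark <= 1.6.0: .toDF() breaks here, so build the DataFrame explicitly instead
// (comment the line above and uncomment the lines below):
// val rawData = sqlContext.createDataFrame(
//   LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1))

// Repartition into a more parallelism-friendly number of partitions.
val data = rawData.repartition(64).cache()
```

Both paths produce the same DataFrame; the difference is only which conversion API is available in the given Spark version.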



incubator-systemml git commit: Update spark-mlcontext-programming-guide.md

2016-09-16 Thread deron
Repository: incubator-systemml
Updated Branches:
  refs/heads/master b2f3fd8e0 -> c9eda508d


Update spark-mlcontext-programming-guide.md

Add code for Spark 1.6 since the example breaks on Spark 1.6

Closes #245, #246.


Project: http://git-wip-us.apache.org/repos/asf/incubator-systemml/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-systemml/commit/c9eda508
Tree: http://git-wip-us.apache.org/repos/asf/incubator-systemml/tree/c9eda508
Diff: http://git-wip-us.apache.org/repos/asf/incubator-systemml/diff/c9eda508

Branch: refs/heads/master
Commit: c9eda508ddaec55e746875fd54a6e13e4ad647aa
Parents: b2f3fd8
Author: Romeo Kienzler 
Authored: Fri Sep 16 12:29:43 2016 -0700
Committer: Deron Eriksson 
Committed: Fri Sep 16 12:29:43 2016 -0700

--
 docs/spark-mlcontext-programming-guide.md | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/c9eda508/docs/spark-mlcontext-programming-guide.md
--
diff --git a/docs/spark-mlcontext-programming-guide.md b/docs/spark-mlcontext-programming-guide.md
index c7b2bb6..c446d1e 100644
--- a/docs/spark-mlcontext-programming-guide.md
+++ b/docs/spark-mlcontext-programming-guide.md
@@ -2284,6 +2284,9 @@ val numRows = 1
 val numCols = 1000
 val rawData = LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1).toDF()
 
+//For Spark version <= 1.6.0 you can use createDataFrame() (comment the line above and uncomment the line below), and for Spark version >= 1.6.1 use .toDF()
+//val rawData = sqlContext.createDataFrame(LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1))
+
 // Repartition into a more parallelism-friendly number of partitions
 val data = rawData.repartition(64).cache()
 {% endhighlight %}