spark git commit: [SPARK-15318][ML][EXAMPLE] spark.ml Collaborative Filtering example does not work in spark-shell

2016-05-17 Thread srowen
Repository: spark
Updated Branches:
  refs/heads/branch-2.0 273f3d052 -> 670f48222


[SPARK-15318][ML][EXAMPLE] spark.ml Collaborative Filtering example does not 
work in spark-shell

## What changes were proposed in this pull request?

Copying & pasting the example from ml-collaborative-filtering.html into 
spark-shell produces the following error:
scala> case class Rating(userId: Int, movieId: Int, rating: Float, timestamp: Long)
defined class Rating

scala> object Rating {
     |   def parseRating(str: String): Rating = {
     |     val fields = str.split("::")
     |     assert(fields.size == 4)
     |     Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
     |   }
     | }
<console>:29: error: Rating.type does not take parameters
       Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
       ^
The standard Scala REPL produces the same error.

The Scala/spark-shell REPL has some quirks (e.g. packages are also not well 
supported).

The error occurs because the Scala/spark-shell REPL discards the previous 
definition when an object is defined with the same name as an existing class. 
Solution: avoid a companion object named Rating; this patch moves parseRating 
directly into ALSExample.
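An alternative REPL-safe sketch, assuming a helper object under a different 
name (RatingParser is a hypothetical name for illustration only; the actual 
patch instead defines parseRating directly inside ALSExample):

```scala
// Hypothetical sketch: because the helper object no longer shares the case
// class's name, the REPL keeps both definitions and the case class's
// apply() method remains visible inside parseRating.
case class Rating(userId: Int, movieId: Int, rating: Float, timestamp: Long)

object RatingParser {
  // Parse a "userId::movieId::rating::timestamp" line into a Rating.
  def parseRating(str: String): Rating = {
    val fields = str.split("::")
    assert(fields.size == 4)
    Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
  }
}
```

Pasting these two definitions into spark-shell line by line works, whereas 
the original companion object `object Rating` shadowed the case class.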

## How was this patch tested?

Manually tested:
1. ./bin/run-example ALSExample
2. Copy & paste the example from the generated document into spark-shell. It works fine.

Author: wm...@hotmail.com 

Closes #13110 from wangmiao1981/repl.

(cherry picked from commit bebe5f9811f968db92c2d33e2b30c35cfb808a4a)
Signed-off-by: Sean Owen 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/670f4822
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/670f4822
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/670f4822

Branch: refs/heads/branch-2.0
Commit: 670f482225e20d512c2c1c1fccee5b9a7d3745b0
Parents: 273f3d0
Author: wm...@hotmail.com 
Authored: Tue May 17 16:51:01 2016 +0100
Committer: Sean Owen 
Committed: Tue May 17 16:51:07 2016 +0100

----------------------------------------------------------------------
 .../apache/spark/examples/ml/ALSExample.scala | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/670f4822/examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala
--
diff --git 
a/examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala 
b/examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala
index 6b151a6..da19ea9 100644
--- a/examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala
@@ -24,16 +24,21 @@ import org.apache.spark.ml.recommendation.ALS
 // $example off$
 import org.apache.spark.sql.SparkSession
 
+/**
+ * An example demonstrating ALS.
+ * Run with
+ * {{{
+ * bin/run-example ml.ALSExample
+ * }}}
+ */
 object ALSExample {
 
   // $example on$
   case class Rating(userId: Int, movieId: Int, rating: Float, timestamp: Long)
-  object Rating {
-    def parseRating(str: String): Rating = {
-      val fields = str.split("::")
-      assert(fields.size == 4)
-      Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
-    }
+  def parseRating(str: String): Rating = {
+    val fields = str.split("::")
+    assert(fields.size == 4)
+    Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
   }
   // $example off$
 
@@ -46,7 +51,7 @@ object ALSExample {
 
 // $example on$
     val ratings = spark.read.text("data/mllib/als/sample_movielens_ratings.txt")
-      .map(Rating.parseRating)
+      .map(parseRating)
       .toDF()
     val Array(training, test) = ratings.randomSplit(Array(0.8, 0.2))
 


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-15318][ML][EXAMPLE] spark.ml Collaborative Filtering example does not work in spark-shell

2016-05-17 Thread srowen
Repository: spark
Updated Branches:
  refs/heads/master 932d80029 -> bebe5f981


[SPARK-15318][ML][EXAMPLE] spark.ml Collaborative Filtering example does not 
work in spark-shell

## What changes were proposed in this pull request?

Copying & pasting the example from ml-collaborative-filtering.html into 
spark-shell produces the following error:
scala> case class Rating(userId: Int, movieId: Int, rating: Float, timestamp: Long)
defined class Rating

scala> object Rating {
     |   def parseRating(str: String): Rating = {
     |     val fields = str.split("::")
     |     assert(fields.size == 4)
     |     Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
     |   }
     | }
<console>:29: error: Rating.type does not take parameters
       Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
       ^
The standard Scala REPL produces the same error.

The Scala/spark-shell REPL has some quirks (e.g. packages are also not well 
supported).

The error occurs because the Scala/spark-shell REPL discards the previous 
definition when an object is defined with the same name as an existing class. 
Solution: avoid a companion object named Rating; this patch moves parseRating 
directly into ALSExample.

## How was this patch tested?

Manually tested:
1. ./bin/run-example ALSExample
2. Copy & paste the example from the generated document into spark-shell. It works fine.

Author: wm...@hotmail.com 

Closes #13110 from wangmiao1981/repl.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/bebe5f98
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/bebe5f98
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/bebe5f98

Branch: refs/heads/master
Commit: bebe5f9811f968db92c2d33e2b30c35cfb808a4a
Parents: 932d800
Author: wm...@hotmail.com 
Authored: Tue May 17 16:51:01 2016 +0100
Committer: Sean Owen 
Committed: Tue May 17 16:51:01 2016 +0100

----------------------------------------------------------------------
 .../apache/spark/examples/ml/ALSExample.scala | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/bebe5f98/examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala
--
diff --git 
a/examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala 
b/examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala
index 6b151a6..da19ea9 100644
--- a/examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala
+++ b/examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala
@@ -24,16 +24,21 @@ import org.apache.spark.ml.recommendation.ALS
 // $example off$
 import org.apache.spark.sql.SparkSession
 
+/**
+ * An example demonstrating ALS.
+ * Run with
+ * {{{
+ * bin/run-example ml.ALSExample
+ * }}}
+ */
 object ALSExample {
 
   // $example on$
   case class Rating(userId: Int, movieId: Int, rating: Float, timestamp: Long)
-  object Rating {
-    def parseRating(str: String): Rating = {
-      val fields = str.split("::")
-      assert(fields.size == 4)
-      Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
-    }
+  def parseRating(str: String): Rating = {
+    val fields = str.split("::")
+    assert(fields.size == 4)
+    Rating(fields(0).toInt, fields(1).toInt, fields(2).toFloat, fields(3).toLong)
   }
   // $example off$
 
@@ -46,7 +51,7 @@ object ALSExample {
 
 // $example on$
     val ratings = spark.read.text("data/mllib/als/sample_movielens_ratings.txt")
-      .map(Rating.parseRating)
+      .map(parseRating)
       .toDF()
     val Array(training, test) = ratings.randomSplit(Array(0.8, 0.2))
 

