srowen commented on a change in pull request #29111:
URL: https://github.com/apache/spark/pull/29111#discussion_r454628725
##########
File path: examples/src/main/java/org/apache/spark/examples/ml/JavaTokenizerExample.java
##########
@@ -23,7 +23,7 @@
import java.util.Arrays;
import java.util.List;
-import scala.collection.mutable.WrappedArray;
+import scala.collection.mutable.Seq;
Review comment:
`WrappedArray` is gone in 2.13 (arrays wrap as `mutable.ArraySeq` there); `mutable.Seq` is an equivalent superclass that works on both 2.12 and 2.13.
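For context, a minimal Scala sketch (illustrative, not from this PR) of why `mutable.Seq` is the portable target type: the wrapper class an `Array` turns into differs by Scala version, but both wrappers conform to `mutable.Seq`, so code written against `mutable.Seq` compiles unchanged on 2.12 and 2.13.
```scala
import scala.collection.mutable

object WrapperDemo {
  def main(args: Array[String]): Unit = {
    // Implicit array wrapping yields a WrappedArray on 2.12 and a
    // mutable.ArraySeq on 2.13; both conform to mutable.Seq.
    val wrapped: mutable.Seq[String] = Array("spark", "scala", "2.13")
    println(wrapped.getClass.getName)
    println(wrapped.mkString(","))
  }
}
```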
##########
File path: examples/src/main/scala/org/apache/spark/examples/SparkKMeans.scala
##########
@@ -82,7 +82,7 @@ object SparkKMeans {
while(tempDist > convergeDist) {
val closest = data.map (p => (closestPoint(p, kPoints), (p, 1)))
- val pointStats = closest.reduceByKey{case ((p1, c1), (p2, c2)) => (p1 + p2, c1 + c2)}
+ val pointStats = closest.reduceByKey(mergeResults)
Review comment:
Not quite sure why, but a few `reduceByKey` calls didn't like the existing pattern-matching syntax on 2.13 and failed with `missing parameter type for expanded function`; I had to break the closure out into an explicitly typed method.
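A hedged sketch of the workaround shape (simplified value types, not the PR's exact code): the pattern-matching closure is replaced by a method with explicit parameter types, which eta-expands cleanly on 2.13.
```scala
import org.apache.spark.sql.SparkSession

object ReduceByKeyWorkaround {
  // Explicitly typed merge function, standing in for a closure like
  //   reduceByKey { case ((p1, c1), (p2, c2)) => (p1 + p2, c1 + c2) }
  // which 2.13 rejected at some call sites with
  // "missing parameter type for expanded function".
  def mergeResults(a: (Double, Int), b: (Double, Int)): (Double, Int) =
    (a._1 + b._1, a._2 + b._2)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ReduceByKeyWorkaround").getOrCreate()
    val closest = spark.sparkContext.parallelize(
      Seq((0, (1.0, 1)), (0, (2.0, 1)), (1, (3.0, 1))))
    val pointStats = closest.reduceByKey(mergeResults)
    pointStats.collect().foreach(println)  // e.g. (0,(3.0,2)) and (1,(3.0,1))
    spark.stop()
  }
}
```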
##########
File path: mllib/src/main/scala/org/apache/spark/ml/Estimator.scala
##########
@@ -26,7 +27,7 @@ import org.apache.spark.sql.Dataset
/**
* Abstract class for estimators that fit models to data.
*/
-abstract class Estimator[M <: Model[M]] extends PipelineStage {
+abstract class Estimator[M <: Model[M] : ClassTag] extends PipelineStage {
Review comment:
I don't quite get why 2.13 thinks this needs a ClassTag (and therefore some subclasses need one too), but I'm just going with it. Will see if MiMa is OK with it.
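For reference, a small sketch (illustrative only, not taken from the PR) of what the `M : ClassTag` context bound provides: an implicit `ClassTag[M]` that any `Array[M]`-building code path requires. My unverified guess is that 2.13 trips on this because `Array#map` now always demands a `ClassTag` for the result element type, whereas 2.12's `CanBuildFrom` fallback could build a plain `Seq` without one.
```scala
import scala.reflect.ClassTag

object ClassTagSketch {
  // `M : ClassTag` desugars to an extra implicit ClassTag[M] parameter,
  // which Array#map needs here to build the resulting Array[M].
  def fitAll[M: ClassTag](inputs: Array[String])(fitOne: String => M): Array[M] =
    inputs.map(fitOne)

  def main(args: Array[String]): Unit = {
    val lengths = fitAll(Array("a", "bb", "ccc"))(_.length)
    println(lengths.mkString(", "))  // 1, 2, 3
  }
}
```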
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]