srowen commented on a change in pull request #24431:
[SPARK-27536][CORE][ML][SQL][STREAMING] Remove most use of
scala.language.existentials
URL: https://github.com/apache/spark/pull/24431#discussion_r278563331
##########
File path: core/src/main/scala/org/apache/spark/scheduler/ShuffleMapTask.scala
##########
@@ -85,13 +82,15 @@ private[spark] class ShuffleMapTask(
threadMXBean.getCurrentThreadCpuTime
} else 0L
val ser = SparkEnv.get.closureSerializer.newInstance()
- val (rdd, dep) = ser.deserialize[(RDD[_], ShuffleDependency[_, _, _])](
+ val rddAndDep = ser.deserialize[(RDD[_], ShuffleDependency[_, _, _])](
Review comment:
This one did, in that the return type was something like `Tuple2[RDD[_],
...]`. This is one of the cases where it comes up: when Scala tries to work out
what the type of `rdd` is in the destructuring, it can only infer `RDD[_]`,
and it will only allow the assignment if it can infer the existence of some
concrete type behind each wildcard.
Now, why is it OK to make the assignment from `RDD[_]` when it's broken out
below? I don't know. I'm not sure whether it's just a common 'exception' that
the compiler makes, or whether there is some subtlety about why the nested
generic type above is different. Either way, this seemed to solve it.
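To illustrate, here's a minimal sketch of the two shapes, with hypothetical
stand-ins (`Box` for `RDD`, `Dep` for `ShuffleDependency`); in the actual
`ShuffleMapTask` code it's the destructured form that needed
`scala.language.existentials`:
```scala
object ExistentialsSketch {
  class Box[A]
  class Dep[K, V, C]

  def fetch(): (Box[_], Dep[_, _, _]) = (new Box[Int], new Dep[Int, String, Long])

  // Destructuring asks the compiler to infer existential types for both
  // pattern variables in one step (this is the shape the PR removes):
  // val (box, dep) = fetch()

  // Binding the whole tuple and reading each member out separately only
  // needs plain wildcard types, so no language import is required:
  val boxAndDep = fetch()
  val box: Box[_] = boxAndDep._1
  val dep: Dep[_, _, _] = boxAndDep._2
}
```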
This could also be solved more properly if the type arguments to `deserialize`
were concrete types rather than wildcards. None are apparent here, though the
call could also have been pulled into a method whose type parameters declare
their existence. That seemed like a bigger change.
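For what it's worth, a hedged sketch of that alternative (not what this PR
does), again with a hypothetical `Dep` stand-in: a generic method gives the
unknown types names, so an existential value can be handed to it without the
language import:
```scala
object MethodParamSketch {
  class Dep[K, V, C]

  // The type parameters K, V, C 'declare the existence' of the unknowns;
  // inside the body they behave as ordinary named types.
  def process[K, V, C](dep: Dep[K, V, C]): Unit = ()

  val someDep: Dep[_, _, _] = new Dep[Int, String, Long]
  process(someDep)  // compiles: the compiler binds K, V, C to the unknowns
}
```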