GitHub user feynmanliang commented on a diff in the pull request:
https://github.com/apache/spark/pull/7099#discussion_r33713764
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/RegressionMetrics.scala
---
@@ -31,7 +31,8 @@ import org.apache.spark.sql.DataFrame
* @param predictionAndObservations an RDD of (prediction, observation) pairs.
*/
@Experimental
-class RegressionMetrics(predictionAndObservations: RDD[(Double, Double)]) extends Logging {
+class RegressionMetrics(predictionAndObservations: RDD[(Double, Double)])
+ extends Logging with Serializable {
--- End diff --
Not too sure why, but I was getting `Task not serializable` errors without
this. I suspect it's because everything captured by the `train()` method's
closure gets serialized, and that closure captures this instance.
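For context, here's a minimal sketch of the failure mode I think I was hitting (the class and names are made up for illustration, not the PR's code): referencing a field inside an RDD closure captures `this`, so Spark has to serialize the whole instance.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

// No `Serializable` here, mirroring RegressionMetrics before this change.
class UnserializableMetrics(data: RDD[(Double, Double)]) {
  val shift = 1.0

  def shiftedErrorSum(): Double =
    // `shift` is really `this.shift`, so the closure captures the whole
    // instance; Spark then throws SparkException: Task not serializable.
    data.map { case (p, o) => (p - o) + shift }.sum()
}

object Repro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("task-not-serializable-repro"))
    val rdd = sc.parallelize(Seq((1.0, 0.5), (2.0, 1.5)))
    new UnserializableMetrics(rdd).shiftedErrorSum() // throws without Serializable
    sc.stop()
  }
}
```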
I followed up on how lazy vals interact with serialization and found [this SO
post](http://stackoverflow.com/questions/27882307/how-does-serialization-of-lazy-fields-work),
which says that a lazy val's value is serialized iff it was computed before
serialization; otherwise the deserialized copy recomputes it on first access.
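A plain-JVM sketch of that behavior, again illustrative only:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// A serializable holder with one lazy field.
class Holder extends Serializable {
  lazy val expensive: Double = {
    println("computing...") // prints each time the value is (re)computed
    math.Pi
  }
}

object LazySerDemo {
  private def roundTrip(h: Holder): Holder = {
    val bos = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(bos)
    oos.writeObject(h)
    oos.close()
    new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray))
      .readObject().asInstanceOf[Holder]
  }

  def main(args: Array[String]): Unit = {
    // Never forced before serialization: the deserialized copy recomputes.
    roundTrip(new Holder).expensive // prints "computing..."

    // Forced before serialization: the computed value travels with the bytes.
    val forced = new Holder
    forced.expensive                // prints "computing..."
    roundTrip(forced).expensive     // no print; the value was serialized
  }
}
```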
In my updated implementation, one option could be to force the evaluation
of `RegressionMetrics.summary` in the `LinearRegressionTestResults`
constructor. However, even though this class is serializable, I don't expect
it to be replicated anywhere except the driver, so this eager evaluation may
be unnecessary. @jkbradley thoughts?
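Concretely, the eager option could look something like this (`LinearRegressionTestResults` here is only a sketch of what I have in mind, not the PR's actual code; since `summary` is private, I'd force it indirectly through a public metric):

```scala
import org.apache.spark.mllib.evaluation.RegressionMetrics
import org.apache.spark.rdd.RDD

// Hypothetical sketch of the results class.
class LinearRegressionTestResults(
    predictionAndObservations: RDD[(Double, Double)]) extends Serializable {

  val metrics = new RegressionMetrics(predictionAndObservations)

  // Reading any metric in the constructor forces the private lazy `summary`
  // inside RegressionMetrics eagerly, on the driver, so a later serialization
  // would carry the computed statistics instead of recomputing them.
  val meanSquaredError: Double = metrics.meanSquaredError
}
```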