Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5371#discussion_r27779407
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -53,19 +52,19 @@ private[spark] class MapOutputTrackerMasterActor(tracker: MapOutputTrackerMaster
         val msg = s"Map output statuses were $serializedSize bytes which " +
           s"exceeds spark.akka.frameSize ($maxAkkaFrameSize bytes)."
-        /* For SPARK-1244 we'll opt for just logging an error and then throwing an exception.
-         * Note that on exception the actor will just restart. A bigger refactoring (SPARK-1239)
-         * will ultimately remove this entire code path. */
+        /* For SPARK-1244 we'll opt for just logging an error and then sending it to the sender.
+         * A bigger refactoring (SPARK-1239) will ultimately remove this entire code path. */
         val exception = new SparkException(msg)
         logError(msg, exception)
-        throw exception
+        context.sendFailure(exception)
--- End diff ---
Does Akka always serialize exceptions it sends back? Some exceptions cannot
be serialized, and we should be careful there.
Can you add a unit test calling sendFailure with an unserializable
exception to make sure things still work?
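To make the concern concrete, here is a minimal sketch of what such a test
could look like using plain Akka 2.3-era APIs (which is roughly what the RPC
layer sits on today). The names UnserializableException, FailingActor, and
SendFailureSketch are hypothetical, and Status.Failure stands in for whatever
sendFailure ultimately puts on the wire; note that with a local sender no
serialization happens at all, so the interesting case is a remote caller:

    import akka.actor.{Actor, ActorSystem, Props, Status}
    import akka.pattern.ask
    import akka.util.Timeout

    import scala.concurrent.Await
    import scala.concurrent.duration._

    // Hypothetical exception that Java serialization rejects: Exception is
    // Serializable, but this field is not, so serializing it would throw.
    class UnserializableException extends Exception("boom") {
      private val notSerializable = new Object
    }

    // Hypothetical actor standing in for the tracker actor: it replies to
    // every request with a failure, the way sendFailure answers an ask.
    class FailingActor extends Actor {
      def receive = {
        case _ => sender() ! Status.Failure(new UnserializableException)
      }
    }

    object SendFailureSketch extends App {
      implicit val timeout = Timeout(5.seconds)
      val system = ActorSystem("sendFailureSketch")
      val ref = system.actorOf(Props[FailingActor])
      try {
        // With a local actor the exception travels by reference and no
        // serialization happens; this ask fails fast with the
        // UnserializableException. A real test should also cover a remote
        // sender, where Akka must serialize the Status.Failure.
        Await.result(ref ? "getMapOutputStatuses", timeout.duration)
      } catch {
        case e: Throwable => println(s"ask failed with: $e")
      } finally {
        system.shutdown()
      }
    }

If the Status.Failure cannot be serialized for a remote caller, the reply is
likely dropped and the ask on the other side times out instead of failing
fast, which is exactly the kind of silent breakage the test should catch.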