Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/5371#discussion_r27783706
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -53,19 +52,19 @@ private[spark] class MapOutputTrackerMasterActor(tracker: MapOutputTrackerMaster
           val msg = s"Map output statuses were $serializedSize bytes which " +
             s"exceeds spark.akka.frameSize ($maxAkkaFrameSize bytes)."
-          /* For SPARK-1244 we'll opt for just logging an error and then throwing an exception.
-           * Note that on exception the actor will just restart. A bigger refactoring (SPARK-1239)
-           * will ultimately remove this entire code path. */
+          /* For SPARK-1244 we'll opt for just logging an error and then sending it to the sender.
+           * A bigger refactoring (SPARK-1239) will ultimately remove this entire code path. */
           val exception = new SparkException(msg)
           logError(msg, exception)
-          throw exception
+          context.sendFailure(exception)
--- End diff --
Such an exception will be swallowed by Akka by default. I added an ErrorMonitor to handle errors caught by Akka. Although they cannot be serialized, at least we can log them.
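
To illustrate the pattern, here is a minimal sketch of such an error monitor, assuming the Akka classic event-stream API and Spark's `Logging` trait; the actor name and the exact match clause are illustrative rather than the code added in this PR:

```scala
import akka.actor.Actor
import akka.event.Logging.Error

import org.apache.spark.Logging

// A monitor actor that subscribes to the actor system's event stream so that
// errors swallowed by Akka are at least written to Spark's logs. It logs via
// Spark's Logging trait rather than Akka's LoggingAdapter, which would publish
// new Error events back onto the stream it is listening to.
private class ErrorMonitor extends Actor with Logging {

  override def preStart(): Unit = {
    // Ask Akka to deliver every Logging.Error published in this actor system.
    context.system.eventStream.subscribe(self, classOf[Error])
  }

  override def receive: Receive = {
    // The cause may not be serializable, but logging it locally always works.
    case Error(cause: Throwable, _, _, message: Any) =>
      logError(message.toString, cause)
  }
}
```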