Github user davies commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3003#discussion_r19623260
  
    --- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
    @@ -210,25 +213,26 @@ private[spark] class Executor(
             val resultSize = serializedDirectResult.limit
     
             // directSend = sending directly back to the driver
    -        val (serializedResult, directSend) = {
    -          if (resultSize >= akkaFrameSize - AkkaUtils.reservedSizeBytes) {
    +        val serializedResult = {
    +          if (resultSize > maxResultSize) {
    +            logInfo(s"Finished $taskName (TID $taskId). result is too 
large (${resultSize} bytes),"
    +              + " drop it")
    +            ser.serialize(new TooLargeTaskResult(resultSize))
    --- End diff --
    
    At first, I tried the way you suggested and introduced another kind of 
    failure, but then we need to handle it specially, because we would not 
    retry the task for this kind of failure.
    
    Also, even when no single task's result is bigger than maxResultSize, all 
    the tasks can succeed and we still abort the job once the total result 
    size exceeds the limit. So I think it's better to handle a single 
    oversized task result in the same way.
    
    So I think the current approach is better than introducing another type of 
    failure.
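    
    For context, here is a minimal, self-contained sketch (not the actual 
    Spark code) of the behavior argued for above: a single result that 
    exceeds the cap ends in the same abort as an aggregate of results that 
    crosses it. TooLargeTaskResult mirrors the marker in the diff; all other 
    names (onTaskResult, abort, the size bookkeeping) are illustrative 
    assumptions, not the real driver-side API.
    
        object ResultSizeSketch {
          // Marker from the diff above: the executor dropped an oversized result.
          case class TooLargeTaskResult(size: Long)
        
          val maxResultSize: Long = 1L << 30   // e.g. a 1 GB cap on result bytes
          private var totalResultSize: Long = 0L
        
          def abort(reason: String): Unit =
            println(s"Aborting job: $reason")
        
          // Called once per finished task, with either the dropped-result marker
          // or the serialized size of a successfully returned result.
          def onTaskResult(taskId: Long, result: Either[TooLargeTaskResult, Long]): Unit =
            result match {
              case Left(TooLargeTaskResult(size)) =>
                // A single result exceeded the cap: the executor dropped it and
                // the driver aborts, the same outcome as the aggregate case below.
                abort(s"Task $taskId result ($size bytes) is bigger than maxResultSize")
              case Right(resultSize) =>
                totalResultSize += resultSize
                if (totalResultSize > maxResultSize) {
                  // Every task succeeded individually, yet the job is still
                  // aborted once the accumulated result size crosses the limit.
                  abort(s"Total result size ($totalResultSize bytes) exceeds maxResultSize")
                }
            }
        }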


