Github user kayousterhout commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3003#discussion_r19635301
  
    --- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
    @@ -210,25 +213,26 @@ private[spark] class Executor(
             val resultSize = serializedDirectResult.limit
     
             // directSend = sending directly back to the driver
    -        val (serializedResult, directSend) = {
    -          if (resultSize >= akkaFrameSize - AkkaUtils.reservedSizeBytes) {
    +        val serializedResult = {
    +          if (resultSize > maxResultSize) {
    +            logInfo(s"Finished $taskName (TID $taskId). result is too large (${resultSize} bytes),"
    --- End diff ---
    
    result --> Result.
    
    Also I think it would be helpful to mention the config parameter name and the current setting here to aid with debuggability; e.g.:
    
    "Finished $taskName (TID $taskId), but dropping result because the size ($resultSize bytes) is larger than the maximum result size ($maxResultSize). Increase spark.driver.maxResultSize to allow larger task results."
    
    This makes the message longer, but in general I think it's much easier for users than having to google the error and track down the config option they may want to change.
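    For illustration, a minimal self-contained sketch of how the suggested message could be built (the variable values below are hypothetical placeholders, not from the PR; only the message format follows the suggestion above):
    
    ```scala
    // Hypothetical values standing in for the executor's local variables.
    val taskName = "task 0.0 in stage 1.0"
    val taskId = 3L
    val resultSize = 2L * 1024 * 1024   // serialized result size in bytes
    val maxResultSize = 1L * 1024 * 1024 // value of spark.driver.maxResultSize
    
    // The suggested log message, naming the config parameter so a user can
    // act on it without searching for the relevant setting.
    val msg = s"Finished $taskName (TID $taskId), but dropping result because the size " +
      s"($resultSize bytes) is larger than the maximum result size ($maxResultSize). " +
      "Increase spark.driver.maxResultSize to allow larger task results."
    
    println(msg)
    ```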


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
