Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1507#discussion_r15905625
  
    --- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala ---
    @@ -73,11 +75,16 @@ class TaskMetrics extends Serializable {
       var inputMetrics: Option[InputMetrics] = None
     
       /**
    -   * If this task reads from shuffle output, metrics on getting shuffle data will be collected here
    +   * If this task reads from shuffle output, metrics on getting shuffle data will be collected here.
    +   * This includes read metrics aggregated over all the task's shuffle dependencies.
        */
    -  private var _shuffleReadMetrics: Option[ShuffleReadMetrics] = None
    +  var shuffleReadMetrics: Option[ShuffleReadMetrics] = None
    --- End diff ---
    
    can we add a `private[spark]` setter for it and explain that it should only be 
used when recreating objects from JSON? I find it weird to expose this as a var, 
since users should never modify it directly - if someone looked at this class, 
that would be non-obvious. So I'd say we keep the var private, add a 
`private[spark]` setter, and note in the doc that it should only be used during 
JSON deserialization.
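
    The suggested pattern might look like the sketch below - a minimal, 
self-contained illustration, not the actual Spark source (`ShuffleReadMetrics` 
is stubbed out, and the setter name `setShuffleReadMetrics` is an assumption):

    ```scala
    // Stub standing in for the real ShuffleReadMetrics class.
    class ShuffleReadMetrics extends Serializable

    class TaskMetrics extends Serializable {
      /**
       * If this task reads from shuffle output, metrics on getting shuffle data
       * will be collected here. This includes read metrics aggregated over all
       * the task's shuffle dependencies.
       */
      private var _shuffleReadMetrics: Option[ShuffleReadMetrics] = None

      // Public read-only accessor: user code can inspect but not reassign.
      def shuffleReadMetrics: Option[ShuffleReadMetrics] = _shuffleReadMetrics

      /**
       * Should only be called when recreating TaskMetrics from JSON during
       * deserialization; user code must never invoke this.
       */
      private[spark] def setShuffleReadMetrics(metrics: Option[ShuffleReadMetrics]): Unit = {
        _shuffleReadMetrics = metrics
      }
    }
    ```

    This keeps the field immutable from the user's perspective while still 
letting Spark-internal code (the JSON deserializer) restore it.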


