Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22674#discussion_r224756886
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala ---
    @@ -39,7 +39,22 @@ case class SparkListenerSQLExecutionStart(
     
     @DeveloperApi
     case class SparkListenerSQLExecutionEnd(executionId: Long, time: Long)
    -  extends SparkListenerEvent
    +  extends SparkListenerEvent {
    +
    +  // The name of the execution, e.g. `df.collect` will trigger a SQL execution with name "collect".
    +  @JsonIgnore private[sql] var executionName: Option[String] = None
    +
    +  // The following 3 fields are only accessed when `executionName` is defined.
    +
    +  // The duration of the SQL execution, in nanoseconds.
    +  @JsonIgnore private[sql] var duration: Long = 0L
    --- End diff --
    
    There is a test to verify it: https://github.com/apache/spark/pull/22674/files#diff-6fa1d00d1cb20554dda238f2a3bc3ecbR55
    
    I also used `@JsonIgnoreProperties` before, when these fields were case class fields. It seems we don't need `@JsonIgnoreProperties` when they are private `var`s.
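    
    For context, here is a minimal, self-contained sketch of why `@JsonIgnore` on the body-level `var`s is enough. The names `ExecutionEnd` and `JsonIgnoreDemo` are hypothetical, and it uses a plain Jackson `ObjectMapper` with the Scala module rather than Spark's own event-log path, so treat it as an illustration of the pattern rather than the exact Spark behavior:
    
    import com.fasterxml.jackson.annotation.JsonIgnore
    import com.fasterxml.jackson.databind.ObjectMapper
    import com.fasterxml.jackson.module.scala.DefaultScalaModule
    
    // Hypothetical stand-in for SparkListenerSQLExecutionEnd: the constructor
    // fields are ordinary case class fields, while the mutable fields in the
    // body carry @JsonIgnore so they stay out of the JSON output.
    case class ExecutionEnd(executionId: Long, time: Long) {
      @JsonIgnore var executionName: Option[String] = None
      @JsonIgnore var duration: Long = 0L
    }
    
    object JsonIgnoreDemo {
      def main(args: Array[String]): Unit = {
        // Plain Jackson mapper with the Scala module (assumption: close to
        // how custom SparkListenerEvents get written to the event log).
        val mapper = new ObjectMapper().registerModule(DefaultScalaModule)
    
        val event = ExecutionEnd(executionId = 1L, time = 12345L)
        event.executionName = Some("collect")
        event.duration = 42L
    
        // Assuming @JsonIgnore is honored the way the linked test verifies,
        // only the constructor fields appear: {"executionId":1,"time":12345}
        println(mapper.writeValueAsString(event))
      }
    }
    
    When the fields lived in the constructor instead, each one became a JSON property of the case class, which is why that earlier version needed a class-level `@JsonIgnoreProperties` to filter them out.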

