[jira] [Assigned] (SPARK-23942) PySpark's collect doesn't trigger QueryExecutionListener

2018-04-09 Thread Apache Spark (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-23942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-23942:


Assignee: (was: Apache Spark)

> PySpark's collect doesn't trigger QueryExecutionListener
> 
>
> Key: SPARK-23942
> URL: https://issues.apache.org/jira/browse/SPARK-23942
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>  Affects Versions: 2.4.0
>  Reporter: Hyukjin Kwon
>  Priority: Major
>
> For example, if you have a custom query execution listener:
> {code}
> package org.apache.spark.sql
>
> import org.apache.spark.internal.Logging
> import org.apache.spark.sql.execution.QueryExecution
> import org.apache.spark.sql.util.QueryExecutionListener
>
> class TestQueryExecutionListener extends QueryExecutionListener with Logging {
>   override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit = {
>     logError("Look at me! I'm 'onSuccess'")
>   }
>
>   override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = { }
> }
> {code}
> and set "spark.sql.queryExecutionListeners" to "org.apache.spark.sql.TestQueryExecutionListener", then PySpark's collect() and Arrow-based toPandas() do not notify the listener (no 'onSuccess' log appears):
> {code}
> >>> sql("SELECT * FROM range(1)").collect()
> [Row(id=0)]
> {code}
> {code}
> >>> spark.conf.set("spark.sql.execution.arrow.enabled", "true")
> >>> sql("SELECT * FROM range(1)").toPandas()
>    id
> 0   0
> {code}
> Other actions such as show(), as well as collect() on the Scala side, seem fine (the listener is notified):
> {code}
> >>> sql("SELECT * FROM range(1)").show()
> 18/04/09 17:02:04 ERROR TestQueryExecutionListener: Look at me! I'm 'onSuccess'
> +---+
> | id|
> +---+
> |  0|
> +---+
> {code}
> {code}
> scala> sql("SELECT * FROM range(1)").collect()
> 18/04/09 16:58:41 ERROR TestQueryExecutionListener: Look at me! I'm 'onSuccess'
> res1: Array[org.apache.spark.sql.Row] = Array([0])
> {code}
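For reference, a minimal PySpark reproduction of the setup described in the report might look like the sketch below. It assumes the Scala TestQueryExecutionListener shown above has already been compiled onto the driver classpath; the script itself is illustrative and not part of the original report.

{code}
# Minimal sketch (assumption): register the listener through configuration
# when building the session, then run the two actions the report says do
# not notify the listener.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.sql.queryExecutionListeners",
                 "org.apache.spark.sql.TestQueryExecutionListener")
         .getOrCreate())

df = spark.sql("SELECT * FROM range(1)")

# Per the report, neither call below produces the "Look at me! I'm 'onSuccess'"
# log line, while df.show() and a Scala-side collect() do.
df.collect()

# Arrow-based conversion (requires pandas and pyarrow to be installed).
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
df.toPandas()
{code}

On the Scala side, a listener like this can also be attached programmatically via spark.listenerManager.register(new TestQueryExecutionListener), which can be more convenient than the configuration key when experimenting.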





[jira] [Assigned] (SPARK-23942) PySpark's collect doesn't trigger QueryExecutionListener

2018-04-09 Thread Apache Spark (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-23942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-23942:


Assignee: Apache Spark



