Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/21222#discussion_r207392732
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/debug/package.scala ---
@@ -116,6 +177,30 @@ package object debug {
}
}
+ implicit class DebugStreamQuery(query: StreamingQuery) extends Logging {
+ def debug(): Unit = {
--- End diff --
Now I see why this doesn't work in continuous mode: it requires executing
`executedPlan`. It may not work properly in micro-batch mode either. Neither
micro-batch mode nor continuous mode is designed to run an internal plan multiple
times. For example, there is a race condition where `debug` may try to run a batch
that has already been cleaned up. I suggest removing this method, as the result may
be confusing.
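
To illustrate the concern, here is a minimal sketch (not the PR's actual code) of what a `StreamingQuery`-level `debug` presumably has to do: reach into the running `StreamExecution` via `lastExecution` and re-execute its `executedPlan` outside the engine's own run loop. The helper name `debugLastBatch` is hypothetical.

```scala
import org.apache.spark.sql.execution.streaming.{StreamExecution, StreamingQueryWrapper}
import org.apache.spark.sql.streaming.StreamingQuery

// Hypothetical helper; it only compiles inside the org.apache.spark.sql package tree
// (like the debug package object itself), since StreamingQueryWrapper and
// StreamExecution are private[sql].
def debugLastBatch(query: StreamingQuery): Unit = query match {
  case wrapper: StreamingQueryWrapper =>
    val stream: StreamExecution = wrapper.streamingQuery
    // lastExecution holds the IncrementalExecution of the most recently planned
    // batch/epoch; it is null before the first batch and is replaced as the query
    // makes progress.
    Option(stream.lastExecution) match {
      case Some(lastExecution) =>
        // Re-executing the already-executed plan happens outside the engine's run
        // loop, so the engine may have committed this batch and cleaned it up by
        // now; the re-run can fail or return misleading results.
        val rowCount = lastExecution.executedPlan.execute().count()
        println(s"Results returned: $rowCount")
      case None =>
        println("No batch has been executed yet.")
    }
  case other =>
    println(s"Cannot debug a ${other.getClass.getName}")
}
```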