WweiL commented on code in PR #42859:
URL: https://github.com/apache/spark/pull/42859#discussion_r1320397874
##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/streaming/StreamingQuery.scala:
##########
@@ -242,17 +242,15 @@ class RemoteStreamingQuery(
}
override def exception: Option[StreamingQueryException] = {
- val exception = executeQueryCmd(_.setException(true)).getException
- if (exception.hasExceptionMessage) {
- Some(
- new StreamingQueryException(
- // message maps to the return value of original StreamingQueryException's toString method
- message = exception.getExceptionMessage,
- errorClass = exception.getErrorClass,
- stackTrace = exception.getStackTrace))
- } else {
- None
+ try {
+ // When setException is false, the server throws a StreamingQueryException
+ // to the client.
+ executeQueryCmd(_.setException(false))
+ } catch {
+ case e: StreamingQueryException => return Some(e)
Review Comment:
Thanks for the PR!
May I know whether there is a reason to prefer this throw-based approach over
having the client reconstruct the error?
I remember that letting the server throw limits the exception's output size to
2048 characters, which is normally too small for streaming query exceptions. I
was thinking the reconstruction approach could mitigate that.
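To make the tradeoff concrete, here is a minimal, self-contained Scala sketch of the two approaches under discussion. All names below (`ExceptionInfo`, `reconstruct`, `catchThrown`, `runQueryCmd`) are hypothetical stand-ins for illustration, not the actual Spark Connect API:

```scala
// Hypothetical stand-in for the real exception type; not the actual API.
final class StreamingQueryException(
    val message: String,
    val errorClass: String,
    val stackTrace: String) extends Exception(message)

// Fields the server would return when asked for exception details.
case class ExceptionInfo(
    message: Option[String], errorClass: String, stackTrace: String)

// Approach 1 (the code being removed): the client reconstructs the
// exception from returned fields, so no transport-side truncation of a
// thrown error applies to the message.
def reconstruct(info: ExceptionInfo): Option[StreamingQueryException] =
  info.message.map(m =>
    new StreamingQueryException(m, info.errorClass, info.stackTrace))

// Approach 2 (the code being added): the server throws and the client
// simply catches; simpler, but the thrown error's text may be
// size-limited by the transport.
def catchThrown(runQueryCmd: () => Unit): Option[StreamingQueryException] =
  try { runQueryCmd(); None }
  catch { case e: StreamingQueryException => Some(e) }
```

With `reconstruct`, an absent message maps cleanly to `None`; with `catchThrown`, a query command that completes without throwing likewise yields `None`.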
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]