This is an automated email from the ASF dual-hosted git repository.
dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new bb36695a6b94 [SPARK-54550][CONNECT] Handle `ConnectException` gracefully in `SparkConnectStatement.close()`
bb36695a6b94 is described below
commit bb36695a6b942e5856cf2aee24bd8e9ef3b3c905
Author: vinodkc <[email protected]>
AuthorDate: Fri Nov 28 18:56:42 2025 -0800
[SPARK-54550][CONNECT] Handle `ConnectException` gracefully in `SparkConnectStatement.close()`
### What changes were proposed in this pull request?
This PR fixes two issues in
`org.apache.spark.sql.connect.client.jdbc.SparkConnectStatement.close()`:
- Added a `try-catch` to silently handle `ConnectException` during `interruptOperation()` when the server is unavailable
- Fixed a bug: changed `closed = false` to `closed = true` at the end of the method (see the reconstructed method below)
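For reference, this is the resulting `close()` method after the change, reconstructed from the diff below:

```scala
override def close(): Unit = synchronized {
  if (!closed) {
    if (operationId != null) {
      try {
        conn.spark.interruptOperation(operationId)
      } catch {
        case _: java.net.ConnectException =>
          // Ignore ConnectExceptions during cleanup as the operation may have
          // already completed or the server may be unavailable. The important
          // part is marking this statement as closed to prevent further use.
      }
      operationId = null
    }
    if (resultSet != null) {
      resultSet.close()
      resultSet = null
    }
    closed = true
  }
}
```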
### Why are the changes needed?
- `ConnectException` handling: `SparkConnectStatement.close()` should not throw exceptions during cleanup. Connection exceptions at that point are not actionable and can mask a more important exception raised earlier (see the sketch below).
- `closed` flag bug: setting `closed = false` left the statement marked as open, which could allow reuse of a closed statement.
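To illustrate the masking problem, here is a minimal sketch (hypothetical caller code, not part of this patch): when `close()` runs in a `finally` block and throws, the cleanup exception replaces the real failure from `execute()`.

```scala
import java.sql.Statement

// Hypothetical caller: the query fails first, then cleanup runs.
def runQuery(stmt: Statement): Unit = {
  try {
    stmt.execute("SELECT * FROM missing_table") // the "real" failure happens here
  } finally {
    // Before this fix, close() could throw java.net.ConnectException here,
    // and that exception would replace the more informative one from execute().
    stmt.close()
  }
}
```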
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Existing unit tests pass
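A regression test for this behavior might look like the following sketch (hypothetical: `connection` and `stopSparkConnectServer()` are illustrative helpers, not existing test utilities):

```scala
test("close() is safe when the server is unavailable") {
  val stmt = connection.createStatement()
  stmt.execute("SELECT 1")
  stopSparkConnectServer()  // hypothetical helper that makes the server unreachable
  stmt.close()              // must not throw java.net.ConnectException
  assert(stmt.isClosed)     // the closed flag is now correctly set to true
}
```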
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #53260 from vinodkc/br_handle_close_exception.
Authored-by: vinodkc <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
---
.../spark/sql/connect/client/jdbc/SparkConnectStatement.scala | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/sql/connect/client/jdbc/src/main/scala/org/apache/spark/sql/connect/client/jdbc/SparkConnectStatement.scala b/sql/connect/client/jdbc/src/main/scala/org/apache/spark/sql/connect/client/jdbc/SparkConnectStatement.scala
index 3df1ff65498d..d1947ae93a40 100644
--- a/sql/connect/client/jdbc/src/main/scala/org/apache/spark/sql/connect/client/jdbc/SparkConnectStatement.scala
+++ b/sql/connect/client/jdbc/src/main/scala/org/apache/spark/sql/connect/client/jdbc/SparkConnectStatement.scala
@@ -35,14 +35,21 @@ class SparkConnectStatement(conn: SparkConnectConnection) extends Statement {
   override def close(): Unit = synchronized {
     if (!closed) {
       if (operationId != null) {
-        conn.spark.interruptOperation(operationId)
+        try {
+          conn.spark.interruptOperation(operationId)
+        } catch {
+          case _: java.net.ConnectException =>
+            // Ignore ConnectExceptions during cleanup as the operation may have already completed
+            // or the server may be unavailable. The important part is marking this statement
+            // as closed to prevent further use.
+        }
         operationId = null
       }
       if (resultSet != null) {
         resultSet.close()
         resultSet = null
       }
-      closed = false
+      closed = true
     }
   }
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]