sunxiaoguang commented on code in PR #49452:
URL: https://github.com/apache/spark/pull/49452#discussion_r1912037537
##########
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/v2/V2JDBCTest.scala:
##########
@@ -986,4 +986,18 @@ private[v2] trait V2JDBCTest extends SharedSparkSession with DockerIntegrationFu
test("scan with filter push-down with date time functions") {
testDatetime(s"$catalogAndNamespace.${caseConvert("datetime")}")
}
+
+  test("SPARK-50792 Format binary data as a binary literal in JDBC.") {
+    withTable(s"$catalogName.test_binary_literal") {
+      // Create a table with binary column
+      val binary = "X'123456'"
+      val tableName = "test_binary_literal"
+
+      sql(s"CREATE TABLE $catalogName.$tableName (binary_col BINARY)")
+      sql(s"INSERT INTO $catalogName.$tableName VALUES ($binary)")
+
+      val select = s"SELECT * FROM $catalogName.$tableName WHERE binary_col = $binary"
+      assert(spark.sql(select).collect().length === 1, s"Binary literal test failed: $select")
+    }
+  }
Review Comment:
But that would duplicate the same initialization code across all the integration
tests, and any integration test added in the future would have to remember to
include that initialization to cover this case, which is easy to forget.
With the initialization logic in V2JDBCTest, the test is guaranteed to run on
all connectors. I also noticed that other tests in V2JDBCTest already contain
CREATE TABLE and INSERT code, so I'm not sure what the rationale behind that is.
If this is the recommended way to write integration tests, we can certainly
do it that way.
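
For reference, here is a minimal sketch of what keeping the setup inside the
shared trait could look like. The helper name `withBinaryTestTable` is
hypothetical (not code from this PR); it only wraps the same `withTable`/`sql`
calls used in the test above, so every connector suite that mixes in
V2JDBCTest runs the identical setup:

```scala
// Hypothetical helper inside V2JDBCTest (name and placement are assumptions):
// it centralizes the CREATE TABLE / INSERT setup so each connector suite that
// mixes in the trait exercises the same binary-literal code path.
private def withBinaryTestTable(tableName: String, binary: String)(f: => Unit): Unit = {
  withTable(s"$catalogName.$tableName") {
    sql(s"CREATE TABLE $catalogName.$tableName (binary_col BINARY)")
    sql(s"INSERT INTO $catalogName.$tableName VALUES ($binary)")
    f
  }
}

test("SPARK-50792 Format binary data as a binary literal in JDBC.") {
  val binary = "X'123456'"
  withBinaryTestTable("test_binary_literal", binary) {
    // The pushed-down filter should compare against the same binary literal.
    val select = s"SELECT * FROM $catalogName.test_binary_literal WHERE binary_col = $binary"
    assert(spark.sql(select).collect().length === 1, s"Binary literal test failed: $select")
  }
}
```

This keeps the coverage guarantee mentioned above without each
connector-specific suite having to duplicate the setup.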
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]