sunxiaoguang commented on code in PR #49452:
URL: https://github.com/apache/spark/pull/49452#discussion_r1911960764
##########
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/v2/V2JDBCTest.scala:
##########
@@ -986,4 +986,18 @@ private[v2] trait V2JDBCTest extends SharedSparkSession with DockerIntegrationFu
test("scan with filter push-down with date time functions") {
testDatetime(s"$catalogAndNamespace.${caseConvert("datetime")}")
}
+
+ test("SPARK-50792 Format binary data as a binary literal in JDBC.") {
+ withTable(s"$catalogName.test_binary_literal") {
+ // Create a table with binary column
+ val binary = "X'123456'"
+ val tableName = "test_binary_literal"
+
+ sql(s"CREATE TABLE $catalogName.$tableName (binary_col BINARY)")
+ sql(s"INSERT INTO $catalogName.$tableName VALUES ($binary)")
+
+ val select = s"SELECT * FROM $catalogName.$tableName WHERE binary_col = $binary"
+ assert(spark.sql(select).collect().length === 1, s"Binary literal test failed: $select")
+ }
+ }
Review Comment:
> tablePreparation

Sorry, I'm not quite familiar with the test infrastructure, so let me confirm before I make a mistake.
To mix in `tablePreparation` and `dataPreparation` from the trait defined in V2JDBCTest.scala, we need to update each integration test suite so it calls these functions defined in the trait.
And duplicating that extra call across multiple integration test suites is OK, am I right?
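
To make the question concrete, here is a rough sketch of what I have in mind (the suite name is made up, the hook signatures are my assumption from skimming DockerJDBCIntegrationSuite and V2JDBCTest, and I have omitted the other members a concrete suite needs, such as the docker image definition):

```scala
import java.sql.Connection

// Rough sketch only: hook names/signatures are my assumption, and
// required members (e.g. the docker image definition) are omitted.
class ExampleBinaryIntegrationSuite
  extends DockerJDBCIntegrationSuite with V2JDBCTest {

  // Each concrete integration suite would repeat this override once;
  // the shared tests in V2JDBCTest then run against the table prepared here.
  override def tablePreparation(connection: Connection): Unit = {
    // The column type name depends on the database under test.
    connection.prepareStatement(
      "CREATE TABLE test_binary_literal (binary_col BLOB)").executeUpdate()
  }

  override def dataPreparation(connection: Connection): Unit = {
    tablePreparation(connection)
    connection.prepareStatement(
      "INSERT INTO test_binary_literal VALUES (X'123456')").executeUpdate()
  }
}
```

If repeating that override in each of the integration suites is the expected pattern, I'm happy to update them all.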
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]