sunxiaoguang commented on code in PR #49452:
URL: https://github.com/apache/spark/pull/49452#discussion_r1912002030


##########
connector/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/v2/V2JDBCTest.scala:
##########
@@ -986,4 +986,18 @@ private[v2] trait V2JDBCTest extends SharedSparkSession with DockerIntegrationFu
   test("scan with filter push-down with date time functions") {
     testDatetime(s"$catalogAndNamespace.${caseConvert("datetime")}")
   }
+
+  test("SPARK-50792 Format binary data as a binary literal in JDBC.") {
+    withTable(s"$catalogName.test_binary_literal") {
+      // Create a table with binary column
+      val binary = "X'123456'"
+      val tableName = "test_binary_literal"
+
+      sql(s"CREATE TABLE $catalogName.$tableName (binary_col BINARY)")
+      sql(s"INSERT INTO $catalogName.$tableName VALUES ($binary)")
+
+      val select = s"SELECT * FROM $catalogName.$tableName WHERE binary_col = $binary"
+      assert(spark.sql(select).collect().length === 1, s"Binary literal test failed: $select")
+    }
+  }

Review Comment:
    Hm, just realized I have to use Spark SQL to create the table and to use the
types defined in Spark SQL in the test. If I prepare the table and data in
tablePreparation and dataPreparation, that would have to be database-specific,
so the code would have to be duplicated across the connectors for all the
databases.
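
    For illustration (not part of the PR), a rough sketch of what the
per-database preparation would look like; the class names, column types, and
literal syntax below are assumptions, and only the tablePreparation /
dataPreparation hook names come from the existing suites:

```scala
import java.sql.Connection

// Hypothetical sketch: preparing the binary-column table through per-database
// hooks forces dialect-specific SQL into every connector suite.
trait BinaryLiteralPreparation {
  def tablePreparation(connection: Connection): Unit
  def dataPreparation(connection: Connection): Unit
}

// MySQL-flavored preparation (assumed syntax).
class MySQLBinaryPreparation extends BinaryLiteralPreparation {
  override def tablePreparation(connection: Connection): Unit =
    connection.prepareStatement(
      "CREATE TABLE test_binary_literal (binary_col VARBINARY(8))").executeUpdate()
  override def dataPreparation(connection: Connection): Unit =
    connection.prepareStatement(
      "INSERT INTO test_binary_literal VALUES (X'123456')").executeUpdate()
}

// PostgreSQL-flavored preparation (assumed syntax): different column type
// and a different binary literal form.
class PostgresBinaryPreparation extends BinaryLiteralPreparation {
  override def tablePreparation(connection: Connection): Unit =
    connection.prepareStatement(
      "CREATE TABLE test_binary_literal (binary_col BYTEA)").executeUpdate()
  override def dataPreparation(connection: Connection): Unit =
    connection.prepareStatement(
      "INSERT INTO test_binary_literal VALUES ('\\x123456')").executeUpdate()
}
```

    Every connector would need its own variant of this, whereas the Spark SQL
version in the test above stays identical across databases.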



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

