shivsood commented on a change in pull request #25344: [WIP][SPARK-28151][SQL] Mapped ByteType to TinyINT for MsSQLServerDialect
URL: https://github.com/apache/spark/pull/25344#discussion_r334710839
 
 

 ##########
 File path: 
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/MsSqlServerIntegrationSuite.scala
 ##########
 @@ -202,4 +204,25 @@ class MsSqlServerIntegrationSuite extends 
DockerJDBCIntegrationSuite {
     df2.write.jdbc(jdbcUrl, "datescopy", new Properties)
     df3.write.jdbc(jdbcUrl, "stringscopy", new Properties)
   }
+
+  test("SPARK-28151 Test write table with BYTETYPE") {
+    val tableSchema = StructType(Seq(StructField("serialNum", ByteType, true)))
+    // A ByteType column needs Byte values; a plain Int literal in the Row
+    // would fail at runtime with a schema mismatch.
+    val tableData = Seq(Row(10.toByte))
+    val df1 = spark.createDataFrame(
+      spark.sparkContext.parallelize(tableData),
+      tableSchema)
+
+    df1.write
+      .format("jdbc")
+      .mode("overwrite")
+      .option("url", jdbcUrl)
+      .option("dbtable", "byteTable")
+      .save()
+    // Read back from the same table the write targeted.
+    val df2 = spark.read
+      .format("jdbc")
+      .option("url", jdbcUrl)
+      .option("dbtable", "byteTable")
+      .load()
+    df2.show()
+  }
 
 Review comment:
  @srowen @dongjoon-hyun back on this after a while. I have resolved the issue, and the test now passes. I have also added additional tests to (1) cover a broader range of ByteType values and (2) check the column type after reading back, asserting that the returned type is ByteType. The test case now passes fully.
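For context, the dialect change this test exercises boils down to a Catalyst-type-to-JDBC-type lookup. Below is a minimal standalone sketch of that idea; the object and type names are invented for illustration and are not Spark's actual `MsSQLServerDialect` API:

```scala
object ByteTypeMappingSketch {
  // Hypothetical stand-ins for Spark's Catalyst types, so the sketch
  // compiles without a Spark dependency.
  sealed trait DataType
  case object ByteType extends DataType
  case object ShortType extends DataType
  case object IntegerType extends DataType

  // Sketch of the dialect mapping proposed in SPARK-28151:
  // Spark's ByteType (8-bit signed, -128..127) is written out as TINYINT.
  def getJDBCType(dt: DataType): Option[String] = dt match {
    case ByteType  => Some("TINYINT")
    case ShortType => Some("SMALLINT")
    case _         => None // fall through to the generic JDBC mapping
  }

  def main(args: Array[String]): Unit = {
    println(getJDBCType(ByteType).getOrElse("unmapped"))
  }
}
```

One caveat worth keeping in mind when reviewing such a mapping: SQL Server's `TINYINT` is an unsigned 0..255 type, while Spark's `ByteType` is signed, so negative byte values are a useful edge case for the broader-range test mentioned above.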

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
