ulysses-you commented on code in PR #5496:
URL: https://github.com/apache/incubator-gluten/pull/5496#discussion_r1580346752


##########
backends-velox/src/test/scala/org/apache/spark/sql/execution/VeloxParquetWriteForHiveSuite.scala:
##########
@@ -130,60 +122,30 @@ class VeloxParquetWriteForHiveSuite extends GlutenQueryTest with SQLTestUtils {
       spark.sql(
         "CREATE TABLE t (c int, d long, e long)" +
           " STORED AS PARQUET partitioned by (c, d)")
-      withSQLConf("spark.sql.hive.convertMetastoreParquet" -> "true") {
-        checkNativeStaticPartitionWrite(
-          "INSERT OVERWRITE TABLE t partition(c=1, d)" +
-            " SELECT 3 as e, 2 as e",
-          native = false)
-      }
+      spark.sql(
+        "INSERT OVERWRITE TABLE t partition(c=1, d)" +
+          " SELECT 3 as e, 2 as e")
       checkAnswer(spark.table("t"), Row(3, 1, 2))
     }
   }
 
   test("test hive write table") {
     withTable("t") {
       spark.sql("CREATE TABLE t (c int) STORED AS PARQUET")
-      withSQLConf("spark.sql.hive.convertMetastoreParquet" -> "false") {
-        if (
-          SparkShimLoader.getSparkVersion.startsWith("3.4") ||
-          SparkShimLoader.getSparkVersion.startsWith("3.5")
-        ) {
-          checkNativeWrite("INSERT OVERWRITE TABLE t SELECT 1 as c", native = false)
-        } else {
-          checkNativeWrite("INSERT OVERWRITE TABLE t SELECT 1 as c", native = true)
-        }
-
-      }
+      checkNativeWrite("INSERT OVERWRITE TABLE t SELECT 1 as c")
       checkAnswer(spark.table("t"), Row(1))

Review Comment:
   The reason why Gluten with Spark 3.4 and later does not support writes with the Hive format is that Spark converts Hive tables to datasource tables by default, so unless people change a config such as `spark.sql.hive.convertMetastoreParquet`, a Hive-format table will never occur. On the other hand, if a Hive-format table does occur, it means the user deliberately opted out of Spark datasource tables, so we should respect that and not use Velox to write. In short, we only transform datasource table writes to native.
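
   A minimal sketch of the decision rule described above (the object and parameter names here are hypothetical, not Gluten's actual code): a write is handed to Velox only when the table either already is a datasource table, or is a Hive table that Spark will convert to one.

   ```scala
   // Hypothetical sketch of the offload rule discussed above.
   object WriteOffloadDecision {
     // convertMetastoreParquet mirrors spark.sql.hive.convertMetastoreParquet,
     // which defaults to true in Spark.
     def useVeloxWrite(
         isHiveFormatTable: Boolean,
         convertMetastoreParquet: Boolean = true): Boolean =
       // A non-Hive (datasource) table write always goes native.
       // A Hive parquet table write goes native only if Spark converts
       // it to a datasource table; otherwise the user opted out of
       // datasource tables, so we respect that and stay on vanilla Spark.
       !isHiveFormatTable || convertMetastoreParquet
   }
   ```

   Under this rule the deleted `native = false` expectations for Spark 3.4/3.5 correspond to the case `isHiveFormatTable = true` with conversion disabled.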



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

