pan3793 commented on code in PR #46052:
URL: https://github.com/apache/spark/pull/46052#discussion_r1565228786
##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala:
##########
@@ -154,6 +154,16 @@ private[spark] object HiveUtils extends Logging {
.booleanConf
.createWithDefault(true)
+  val CONVERT_INSERTING_UNPARTITIONED_TABLE =
+    buildConf("spark.sql.hive.convertInsertingUnpartitionedTable")
+      .doc("When set to true, and `spark.sql.hive.convertMetastoreParquet` or " +
+        "`spark.sql.hive.convertMetastoreOrc` is true, the built-in ORC/Parquet writer is used " +
+        "to process inserting into unpartitioned ORC/Parquet tables created by using the HiveSQL " +
+        "syntax.")
+      .version("4.0.0")
+      .booleanConf
+      .createWithDefault(true)
Review Comment:
No.
Previously, Spark always converted unpartitioned Hive tables to the DataSource
write path when `spark.sql.hive.convertMetastore[Orc|Parquet]` was `true`.
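
As a sketch of how this new flag would be exercised (assuming a running `SparkSession` named `spark` with Hive support enabled; the table names below are hypothetical), disabling it falls back to the legacy Hive SerDe writer for inserts into unpartitioned ORC/Parquet Hive tables:

```scala
// Sketch only: assumes a SparkSession `spark` with Hive support
// (e.g. in spark-shell). Table names are hypothetical.

// Default behavior (true): with convertMetastoreParquet=true, the
// built-in Parquet writer handles the INSERT.
spark.sql("INSERT INTO TABLE my_hive_parquet_table SELECT * FROM src")

// Opt out: route the same INSERT through the Hive SerDe writer instead.
spark.conf.set("spark.sql.hive.convertInsertingUnpartitionedTable", "false")
spark.sql("INSERT INTO TABLE my_hive_parquet_table SELECT * FROM src")
```

Note this only affects the write path; reads remain governed by `spark.sql.hive.convertMetastore[Orc|Parquet]`.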
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]