[ https://issues.apache.org/jira/browse/SPARK-27555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825747#comment-16825747 ]
Hyukjin Kwon commented on SPARK-27555:
--------------------------------------

Can you post a self-contained reproducer, please?

> cannot create table by using the hive default fileformat in both
> hive-site.xml and spark-defaults.conf
> ------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-27555
>                 URL: https://issues.apache.org/jira/browse/SPARK-27555
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.2
>            Reporter: Hui WANG
>            Priority: Major
>
> I have already seen https://issues.apache.org/jira/browse/SPARK-17620
> and https://issues.apache.org/jira/browse/SPARK-18397,
> and I checked Spark's source code for the effect of setting
> "spark.sql.hive.convertCTAS=true": with it, Spark uses
> "spark.sql.sources.default" (parquet) as the storage format in the
> "create table as select" scenario.
>
> But my case is a plain CREATE TABLE without a SELECT. Whether I set
> hive.default.fileformat=parquet in hive-site.xml or set
> spark.hadoop.hive.default.fileformat=parquet in spark-defaults.conf, the
> created table still uses the textfile format when I inspect it in Hive.
>
> It seems HiveSerDe reads the value of the hive.default.fileformat parameter
> from SQLConf. The parameter values in SQLConf are copied from SparkContext's
> SparkConf at SparkSession initialization, while the configuration parameters
> in hive-site.xml are loaded into SparkContext's hadoopConfiguration by
> SharedState, and all "spark.hadoop.*" settings are applied only to the Hadoop
> configuration, so the setting never takes effect.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
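
A minimal reproducer of the kind requested above might look like the following sketch (an assumption on my part: Spark 2.3.2 built with Hive support; the table name `repro_t` is illustrative):

```
# spark-defaults.conf (illustrative fragment)
spark.hadoop.hive.default.fileformat   parquet
```

```sql
-- Run in spark-sql with the config above in effect.
CREATE TABLE repro_t (id INT);

-- Inspect the storage format the table actually got.
DESC FORMATTED repro_t;
-- Per this report, InputFormat shows org.apache.hadoop.mapred.TextInputFormat
-- rather than the Parquet input format, i.e. hive.default.fileformat was ignored.
```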