Hongyi Zhang created SPARK-35531:
------------------------------------
Summary: Cannot insert into Hive bucketed table if the table is created with an upper-case schema
Key: SPARK-35531
URL: https://issues.apache.org/jira/browse/SPARK-35531
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 3.1.1
Reporter: Hongyi Zhang
Steps to reproduce:

CREATE TABLE TEST1(
  V1 BIGINT,
  S1 INT)
PARTITIONED BY (PK BIGINT)
CLUSTERED BY (V1)
SORTED BY (S1)
INTO 200 BUCKETS
STORED AS PARQUET;

INSERT INTO test1
SELECT * FROM VALUES (1, 1, 1);
The INSERT fails with:

org.apache.spark.sql.AnalysisException:
org.apache.hadoop.hive.ql.metadata.HiveException: Bucket columns V1 is not part
of the table columns ([FieldSchema(name:v1, type:bigint, comment:null),
FieldSchema(name:s1, type:int, comment:null)]

Note that the bucket column keeps its original upper-case name (V1) while Hive
records the table columns in lower case (v1, s1), so the bucket column lookup
fails.
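For reference, a minimal standalone Scala sketch of the same reproduction. It
assumes a Spark 3.1.1 build with Hive support and an available Hive metastore;
the application name and session setup are illustrative and not part of the
original report.

  import org.apache.spark.sql.SparkSession

  // Assumes a Hive metastore is reachable; metastore configuration is not
  // part of the original report.
  val spark = SparkSession.builder()
    .appName("SPARK-35531-repro")
    .enableHiveSupport()
    .getOrCreate()

  // Create the bucketed, partitioned table with upper-case column names.
  spark.sql("""
    CREATE TABLE TEST1(
      V1 BIGINT,
      S1 INT)
    PARTITIONED BY (PK BIGINT)
    CLUSTERED BY (V1)
    SORTED BY (S1)
    INTO 200 BUCKETS
    STORED AS PARQUET
  """)

  // On 3.1.1 this fails with the HiveException quoted above:
  // "Bucket columns V1 is not part of the table columns ..."
  spark.sql("INSERT INTO test1 SELECT * FROM VALUES (1, 1, 1)")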